#### PALGRAVE ANIMATION

## Animating Unpredictable Effects

Nonlinearity in Hollywood's R&D Complex

Jordan Gowanlock

#### Palgrave Animation

Series Editors: Caroline Ruddell, Brunel University London, Uxbridge, UK

Paul Ward, Arts University Bournemouth, Poole, UK

This book series explores animation and conceptual/theoretical issues in an approachable way. The focus is twofold: on core concepts, theories and debates in animation that have yet to be dealt with in book-length format; and on new and innovative research and interdisciplinary work relating to animation as a field. The purpose of the series is to consolidate animation research and provide the 'go to' monographs and anthologies for current and future scholars.

More information about this series at http://www.palgrave.com/gp/series/15948

# Animating Unpredictable Effects

Nonlinearity in Hollywood's R&D Complex

Jordan Gowanlock, Department of Film & Media, University of California, Berkeley, USA

ISSN 2523-8086 ISSN 2523-8094 (electronic) Palgrave Animation ISBN 978-3-030-74226-3 ISBN 978-3-030-74227-0 (eBook) https://doi.org/10.1007/978-3-030-74227-0

© The Editor(s) (if applicable) and The Author(s) 2021. This book is an open access publication.

**Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover image: Jordan Gowanlock

Cover design: eStudioCalamar

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG.

The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

### Acknowledgments

I would like to thank Marc Steinberg for his years of advice and guidance on the project that led to this book. I am also grateful to Kristen Whissel and the faculty and graduate students at the UC Berkeley Department of Film & Media for providing such a supportive community, especially Jennifer Alpert for her no-nonsense writing notes. I would also like to thank all the other scholars who provided input on my work, including Haidee Wasson, Charles Acland, Joshua Neves, Julie Turnock, and Zach Meltzer. Thanks as well to the extremely patient Emily Wood, Lina Aboujieb, and the editorial team at Palgrave Macmillan, and to the series editors Paul Ward and Caroline Ruddell. Finally, I want to thank my long-suffering wife Mariana for her constant encouragement and support.

This project was financially supported by:

The Fonds de Recherche du Québec Postdoctoral Research Scholarship (B3).

The UC Berkeley Department of Film & Media.

The Berkeley Research Impact Initiative.

### Book Abstract

Uncanny computer-generated animations of splashing waves, billowing smoke clouds, and characters' flowing hair have become a ubiquitous presence on screens of all types since the 1980s. *Animating Unpredictable Effects* charts the history of these digital moving images and the software tools that make them. The book uncovers an institutional and industrial history that saw media industries conducting more private R&D as Cold War federal funding began to wane in the late 1980s. In this context studios and media software companies took concepts used for studying and managing unpredictable systems like markets, weather, and fluids and turned them into tools for animation. *Animating Unpredictable Effects* theorizes how these animations are part of a paradigm of control evident across society, while at the same time exploring what they can teach us about the relationship between making and knowing.

### Introduction: "Fully Nonlinear"

The computer science department at Stanford University offers a course called *Computer Graphics: Animation and Simulation.* Many departments at other universities have offered a similar course, including the University of North Carolina, University of California, California Polytechnic, and Carnegie Mellon. A title like *Animation and Simulation* will sound to some like a betrayal of the principles of animation. To many fans, students, and scholars, animation represents an anarchic, unpredictable, representationally unrestricted form of moving image, while simulation represents the rationalizing numerical authority of objectivity and control. Nor is this just a course where computer science students learn to make tools for animators to use. Courses such as these are as much about making moving images as they are about making software, and graduates with experience in this domain are as likely to work for visual effects (VFX), animation, or game studios as they are software companies. Indeed, Stanford's computer science department has a strong connection with special effects studio Industrial Light and Magic. Though this type of animation is easy to dismiss because of its relationship to engineering, this is exactly why we should pay close attention to it. It represents a particular conceptualization of the relationship between engineering and animation production that has been taking shape since the late 1970s. It also provides a window into a paradigm of control shared between contemporary animation and numerous other facets of society that employ simulations of nonlinear systems, which have shaped everything from finance to the way we understand climate change since the 1940s.

The weekly classes of Stanford's *Animation and Simulation* course include titles like procedural modeling, collision processing, character FX, particle-based fluids, and character animation FX. These types of animations create motion not through the manual control of sequential images but through algorithms that are designed to produce automated unpredictable outputs. Studios often use these methods to animate natural phenomena: the flow of hair, the splash of water, vortices in smoke, or the behavior of groups of animals. Textbooks on computer graphics and animation create a similar grouping of topics.1 Any map of contemporary animation, VFX, or large-budget video game production workflows also includes such a category as its own branch of production. This grouping of techniques and tools goes by several names, such as "procedural," "dynamic," "simulated," or "technical" animation. Sometimes they are simply referred to as "FX."
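To make the contrast with frame-by-frame animation concrete, a purely illustrative sketch follows. It is a toy particle system in Python, not drawn from any studio tool or course discussed in this book; the function name and its `gravity` and `turbulence` parameters are hypothetical. The point it demonstrates is the one described above: no frame is posed by hand. Motion emerges from a seeded random procedure, and the "artist" steers its overall look only through parameters.

```python
import random

def simulate_particles(n_particles=100, n_steps=50, gravity=-0.98,
                       turbulence=0.3, seed=42):
    """Advance a toy particle spray. Each step applies deterministic
    gravity plus a pseudo-random 'turbulent' kick to every particle,
    so the motion looks chaotic but is fully reproducible for a given
    seed. Returns a list of frames, each a list of (x, y) positions."""
    rng = random.Random(seed)  # seeded: unpredictable-looking, yet repeatable
    positions = [[0.0, 0.0] for _ in range(n_particles)]
    velocities = [[rng.uniform(-1, 1), rng.uniform(2, 5)]
                  for _ in range(n_particles)]
    dt = 0.1
    frames = []
    for _ in range(n_steps):
        for p, v in zip(positions, velocities):
            v[0] += rng.uniform(-turbulence, turbulence)  # chaotic sideways kick
            v[1] += gravity * dt                          # deterministic pull down
            p[0] += v[0] * dt
            p[1] += v[1] * dt
        frames.append([tuple(p) for p in positions])      # snapshot this frame
    return frames

frames = simulate_particles()
```

Nothing in `frames` was authored directly; changing `turbulence` or the seed reshapes the whole spray at once, which is the sense in which control here operates on parameters rather than on individual images.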

Many observers and critics group these types of animation under the term "physical simulation." But more fundamental to this group than the imitation of physics is their programmed unpredictability, or simulated nonlinearity. In this sense, they are akin to a genre of computer art Frieder Nake terms "generative art," practiced by artists like Georg Nees, Ernest Edmonds, and Nake himself since the 1960s. Yet these forms of animation are not experimental art; they are the product of industrial forces and discourses. Indeed, they are the paradigmatic products of a trend that has seen research and development (R&D) become a substantial part of media industries' production and economics. Whereas conferences held by the Association for Computing Machinery (ACM) or Institute of Electrical and Electronics Engineers (IEEE) on computer graphics and simulation were once dominated by federal funding and the military-industrial complex, since the 1980s media industries like Hollywood have become major sponsors of research. This combination of military and media industries R&D produced a very particular way of seeing contingency and seeking to control it.

This book will refer to these tools and production practices as "nonlinear animation," emphasizing that they are in fact a form of animation. They seek to bring images to life, to animate them, as all animated media do. Just as early film and animation embodied the animating energies of the nineteenth century like clockwork, spiritualism, and electricity, nonlinear animation demonstrates the élan vital of the late twentieth and early twenty-first centuries.2 When computer graphics researcher and Pixar cofounder Alvy Ray Smith infuriated Steve Jobs so severely that Jobs stormed out of a company meeting, Smith described Jobs as having gone "fully nonlinear."3 He had become wild, unpredictable, and undeniably animated. This is how someone like Smith makes sense of such chaotic unruliness.

Chaotic motion has always been a key element of cinema's animate vitality. When Georges Sadoul and Georges Méliès first saw the Lumière brothers' 1895 actuality *Repas De Bébé*, their attention was drawn not to the middle-class domestic scene in the foreground of the film but instead to the background, where the leaves of the trees were being rustled by the wind.4 When our attention is adequately directed, we can still marvel at cinema's ability to capture chaotic, unpredictable motion and events. In his film *Grizzly Man* (2005), Werner Herzog includes a long shot of grass blowing in the wind, narrating "sometimes images themselves develop their own life." Cinema's ability to capture the unpredictable has always been one of its fundamental properties, even in otherwise artificial circumstances. This is an important component of Mary Ann Doane's influential analysis of cinematic time as it relates to industrial modernity.5 In her work, she finds that contingency had an almost irresistible appeal. The camera was unique for its ability to capture unexpected occurrences like a building toppling over, delivering uncanny effects. Cinema contained contingency in a "representational system while maintaining both its threat and its allure."6

Animators have long been interested in the unique, complex quality of natural movement as well. Disney animators Sandy Strother and Ugo D'Orsi were dedicated to this subject on projects such as *Fantasia* (1940) and *Pinocchio* (1940), where they specialized in animating the complex movement and splashes of water. These animators were seeking to bring water to life. Hand-drawn animation may seem like the antithesis of the captured contingency in *Repas De Bébé,* but their preoccupations are alike. As many film and animation scholars argue, we need to think beyond a simple dichotomous view of animation and live-action cinema.7 Vivian Sobchack notes that the dynamic between the effortless vitality of animation and the regulated mechanical control of automation has been central to all animated media, including live-action cinema.8 All of these moving images are brought to life, animated, by seemingly unpredictable movement, yet they also entail a different apparatus for shaping that movement and making meaning from it. For the Lumières it was the capture of the camera, and for Strother and D'Orsi it was the manual manipulation of drawn frames by the artist. For nonlinear animation, this animating force is something else, something that sits in-between capture and manual manipulation.

Nonlinear animation sees vitality extending from the unpredictability of dynamic complexity or randomness. Control of this vitality comes in the form of manipulating data parameters. In an example like Disney's digital animated feature *Moana* (2016), the ocean moves in uncannily lifelike ways, yet it is also so extensively manipulated that it has its own personality, as though it were a character. Artists are able to make the water behave a certain way, sculpting a living, moving thing. Making an animation like this entails not just a different kind of artistic work, but a different relationship between engineering and production. It requires making tools, writing code and scripts, and combining different software and plug-ins. A former Vice President at VFX studio Digital Domain says these jobs require, "a combination of computer scientist and fine artist… the eye of an animator but the brain of a hardcore technologist."9 A Stanford computer science professor and frequent contributor to special effects studio Industrial Light and Magic similarly claims, "a little chaos goes a long way… we've found that less control, better algorithms, and a different breed of artist is the key."10 Clearly there are some industrial promotional discourses working through statements like these, but getting to the bottom of these discourses is key to understanding this form of animation as a product of an industrial-institutional machine that is constantly manufacturing this animated novelty. Embedded in these statements is a particular conception of control and a particular way of thinking about the relationship between image making and technology development.

This discourse of nonlinear control negotiates the already fraught territory of digital animation work. Formerly common representations of all-powerful animators, able to fully control the most minute detail of the worlds they create, have given way to anxieties about being "ousted by technology, made obsolete, or – worse yet – turned into mechanical slaves to digital software."11 Aylish Wood finds that fears about animators being disempowered by black-boxed technology have shaped the design of animation software interfaces.12 This is why Autodesk's Maya software offers a 3D preview that puts the user in touch with the images they are making, giving a sense of creative control, while its features like the "Channel Box" give the sense of access to deeper software functions.13 Nonlinearity represents a total rethink of this question of artistic control. While its unpredictable autonomy would seem to pose new threats to the artist's agency, it also extends control into new domains. This different understanding of control is linked to a different conceptualization of creativity that puts greater emphasis on making technical apparatuses.

At the heart of nonlinear animation is a way of thinking that seeks to make use of unpredictable nonlinear complexity by shaping it toward specific applications. This applies both to the way animation and VFX studios build tools to direct the look of simulated images and also to the way they use hands-off management techniques that seek to direct unpredictable labor tasks involving R&D and creativity. This way of seeing and managing the world is imbricated with the development of similar nonlinear simulation approaches in a number of other industries and research disciplines, such as climate science, sociology, geology, management science, and financial mathematics.14 Understanding nonlinear animation thus entails understanding a broader archeological layer of knowledge that includes various institutions and forms of organization and management. This epistemic horizon, this episteme, applies not just to our supposedly "post-cinematic" digital lives, but also to the way society sees materiality and material phenomena.

The following chapters will investigate this subject by charting the circulation of ideas, technologies, moving images, and people through contact zones such as the ACM's Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH), using archival research of trade communications, scholarly publications, and conference proceedings, as well as interviews with industry workers. This book is structured into five parts that draw an arc from the historical and philosophical roots of nonlinear simulation through to representations of these ideas in themes and VFX aesthetics on cinema screens.

Chapter 2 traces the roots of simulation, nonlinearity, and R&D in the nineteenth century and observes their growth in the institutional context of World War II, the Cold War, and after, revealing how nonlinear animation is a product of this history and proposing some theoretical frameworks for understanding how simulated images make meaning. Chapter 3 studies the shifting role of R&D in the film industry since 1980, explaining the economic and strategic value technology ownership has gained in contemporary VFX and animation. Chapter 4 shifts focus to the more recent past, studying the way technological and software development principles have informed animation and VFX "workflows" and "pipelines," transforming the organization of production and blurring the line between technical and creative work since the 1990s. Chapter 5 keeps this more recent historical focus and studies how the management principles of Pixar have been influenced by nonlinear animation paradigms, offering an updated, nuanced understanding of post-Fordist control. Finally, Chap. 6 studies films since 1982 that both feature nonlinear animation and thematically engage topics in nonlinearity such as chaos theory, catastrophe theory, and perfect storms, finding a complex interaction between the fear nonlinear unpredictability can inspire and the reassuring mastery simulation promises. These chapters represent a range of different conceptual frameworks and methodological approaches to this one subject. Through this, the book offers a more holistic view of how a particular set of animation practices and technologies are interlinked with symbolic, economic, institutional, and discursive historical factors.

No existing term satisfactorily describes what I will continue to refer to as nonlinear animation. But each of the names commonly used to describe some part of nonlinear animation, such as "physical simulation," "technical animation," "procedural animation," and "FX," provides some insight into the particularity of these tools and production practices, as well as the way they are constructed within the industry. Taking a moment to address each of these terms in turn will help us grapple with the complex ontology of nonlinear animation and will highlight what we can learn by studying it.

#### Physical Simulation

A phrase often used by observers to describe nonlinear animation is "physical simulation." This is a term that carries some heavy connotations in media studies. For many, simulation is a marker of false artifice. For example, the term "visual simulation" is frequently applied to many kinds of digital animation to refer to their "perceptual realism." In other words, simulation describes the way these images are cleverly made to look real through perceptual cues like reflection and shadow, when they are in fact mere fakery.15 Simulation is also a key term of postmodern critique, used to describe the artificiality of late capitalism.16 Yet simulation has gradually become an important way to understand the world and to confront the limits of understanding, yielding an everyday utility that we cannot quickly dismiss as mere fakery. Parsing the meaning of simulation is vital to getting to the bottom of how nonlinear animations make meaning.

Some nonlinear animations are based on modified physics equations. Some imitate other kinds of processes in nature, such as the patterns of branch and leaf growth or the behavior of groups of animals. Many seek to represent natural phenomena without being grounded in real-world science, and others are completely abstract. Simulations can range in their fidelity to the mechanism they imitate. The field of ludology, an early subset of game studies, has already demonstrated this.17 Games produce meaning through programmed rules and structure. Those game mechanics may refer to real-world mechanics, like the way *Monopoly* is about real estate economics, but they do not necessarily try to imitate exactly the real-world mechanism they are representing. Games can use simulation to be expressive or imaginative, rather than realistic. Simulations thus require us to think through a different epistemic frame that understands the world through making models rather than through sensation or recording. Historians of technology and science such as Walter Vincenti, Mario Bunge, and Herbert Simon have theorized this form of engineering epistemology, which creates knowledge through "knowing how" instead of "knowing that." Chapter 2 will engage these ideas to develop a framework for how nonlinear animations make meaning as simulations.

#### Technical and Procedural Animation

"Technical animation" is a screen credit sometimes given to workers in VFX and animation studios who work on "character FX" like hair, fur, and cloth. The use of this term points to the special technical skills the workers have, and it identifies a type of animation where every job requires customization and R&D. In contrast to other character work like creating models or manipulating those models directly, nonlinear animation entails configuring software and programming simulations with algorithmic rules or procedures: creating a technological apparatus that will in turn create movement. This emphasis on tool building as a component of image making modifies the labor division between creative work and technical work studied by scholars like Vicki Mayer.18 As Chap. 4 will discuss, there is a very blurry line between nonlinear animation artists and technical directors. This is a discursive shift within the industry, but it also betrays an epistemic shift from the aforementioned "knowing that" to "knowing how," in other words, from creating images of the world to creating models of how it might function.

Nonlinear animation might seem to further the trend of obfuscating the work of animation workers (especially international ones) behind seemingly automated technologies, much the way depictions of performance capture elide the work of animators.19 Focusing on the technical work integrated into production helps to reveal this labor, though. Indeed, it may even help reveal some of the labor obfuscated by performance capture: while the myth of an automated capture system obscures the work of the animators who modify and sometimes replace capture data, beyond these workers there are also numerous technical staff creating pipelines, modifying data, and upgrading and maintaining equipment.

Beyond its creative value, technical work has economic value both because of the images it produces and because of the technological intellectual properties it can lead to. Chapter 3 will explain how economically and strategically important the development of new technology has become for large VFX and animation studios. These studios profit from developing and owning technologies on every step of their journey from novel emergence to standardization and dominance, proceeding on what Tom Gunning calls, "the cycle from wonder to habit."20 Technological change and R&D have been a part of film industries since the days of Thomas Edison, and special effects have always played a key role in negotiating technological change.21 As have animation studios like Disney.22 Yet terms like technical animation and procedural animation point to a historically specific shift in the way technology and R&D are constructed within these industries.

Contemporary animation and VFX studios like Industrial Light and Magic and Pixar frequently promote their cutting-edge technology and the way they integrate creativity with technical innovation. This Silicon Valley-informed discourse sees both creative and technological advances as the product of entrepreneurial innovation that disrupts the ossified structure of large existing businesses and institutions.23 Large studios do invest a great deal of money and effort into technology development, yet the realities of their R&D contradict these Silicon Valley values in many ways. For one, supporting R&D has meant creating strong connections with public and non-profit research institutions. These connections are quite apparent at the ACM's annual graphics conference SIGGRAPH. Indeed, media industries' voracious appetite for nonlinear simulation researchers has largely replaced the once central role Cold War military funding played in supporting research. This new R&D complex between media industries and research institutions affirms the value of the government's role in supporting research, undermining the idea that nimble start-ups are the prime source of innovation. These findings echo economist Mariana Mazzucato's work on Apple's reliance on government-funded research.24 Furthermore, the development and ownership of technologies is something only the largest VFX and animation studios can do, and they use their ownership strategically to maintain control of the market, undermining competition. This strategy works in concert with other market-controlling tactics. Thus, the realities of R&D do not align with the myths of Silicon Valley.

Studios in fact use nonlinear animation paradigms to manage these contradictions. As Chap. 5 argues, Pixar Studio's approach to management is modeled around the concept of nonlinear control. They create the conditions for the unpredictable and unexpected, but they also contain this chaos within carefully engineered parameters. They style the animating force of nonlinearity as a source of creativity and innovation. Through this they are able to construct an image of themselves as an innovative Silicon Valley business while also being a gigantic, controlling force in their industry.

The Silicon Valley ideology that fuels so many animation and VFX studios favors a more technologically determinist view, as Richard Barbrook and Andy Cameron describe.25 Yet the concept of technical animation demonstrates the interactive relationship between society and media technology development. As Raymond Williams notes in his work on television, R&D is a key site where we can observe society's influence on the shape of media technologies. Media like television were "looked for and developed with certain purposes and practices already in mind," and R&D is one place where those social desires were turned into reality.26 Thus, film production is not simply being transformed by the introduction of new external technologies; these tools are being shaped by the demands of studios.

Media R&D also does not operate in a vacuum. It works in concert with other technological and scientific research fields. Nonlinear animation has a close relationship with similar tools used everywhere, from sociology to geology. Chapter 2 explores these connections in detail. While these connections sometimes fuel industry promotional rhetoric that positions a studio's technology as cutting-edge, what this really shows us is connections across an archeological layer of history. As Thomas Elsaesser notes, thinking archeologically about cinema leads us to pay closer attention to the "S/M (science and military) perversions" of cinema: different conceptualizations and uses of the moving image in science, medicine, surveillance, and military applications.27 We have historically neglected these "parallel histories" of cinema. The relationship between media industries and the institutions and businesses that sponsor nonlinear simulation R&D forms a kind of epistemic feedback loop, with neither technology nor practices nor discourses nor institutions being the sole source of the conditions of knowledge, but instead with each feeding into the other. This book seeks to uncover the "archive" of this place in history, in the Foucauldian sense of the word, the "system of enunciability," the totality of both knowledge and power.28

#### FX

FX is increasingly becoming the most popular industrial term to describe nonlinear animation. On the one hand, it is probably the least descriptive of all possible terms. It does not tell us anything about how nonlinear animation works. But it does tell us a great deal about the role it plays in contemporary production. One might assume that FX, a seemingly unnecessary short form of "effects," would refer to any visual or special effects, but the term has gained its own particular meaning, distinguishing nonlinear animation as a special form of production within already special modes of production.

The term FX points to a key question haunting the study of contemporary special and visual effects. At the 2013 *Magic of Special Effects* conference held in Montreal, a preponderance of scholars addressed the question of whether *special effects* is the correct term to use to describe their object of study. How special are special effects anymore? Were they ever special or exceptional? There is little difference between digital post-production work like color correction and VFX work like keying-in backgrounds, and digital post-production is quickly replacing many filmmaking jobs that we once considered standard. This led scholars such as John Belton to conclude that special effects are no longer special but standard practice.29 This difficult distinction applies not just to special effects versus standard filmmaking practices but also to the distinction between animation and VFX. In his influential book *Digital Visual Effects in Cinema* Stephen Prince argues that animation and VFX now belong to one single, large, undifferentiated group of motion graphics.30 In other words, digital tools have flattened former conventional differences. Other scholars have contested this assumption, however. Julie Turnock contends that there are still important conventional differences between VFX and animation production practices.31 So which is it? Do old categories still matter or have all images become the same and thus eliminated any special categories of production?

There is a third possibility we could consider, that, as Wendy Chun argues, digital technology is producing divergence and variety rather than convergence.32 New forms and differences are taking shape with their own specificity and do not necessarily conform to old categories. FX (nonlinear animation) cuts across categories like animation, VFX, and video games, but it represents a discrete category of image making within these different fields. Every large VFX, animation, or game studio will have an FX department. As one artist who worked in both VFX and animation explained to me, FX is the extra flair, "the icing on the cake" of a VFX or animation project.33 FX demarcates a special kind of image making within industries that we would already consider special.

The terms used to describe nonlinear animation give us clues as to what exactly makes it special. The work of producing these animations entails making unpredictable simulations rather than directly controlling the image. This work is technical in nature, putting particular emphasis on making and customizing software. Nonlinear animation is to animation and VFX what special effects used to be to cinema, an exceptional practice that puts particular emphasis on unconventional technical work and custom solutions for a particular effect.

Media industries have been seeing a broad shift in production labor over the past few decades. More and more film production work is being done by ranks of technicians sitting behind computers, and the ascendance of media forms like video games, which require extensive technical work, has further fueled this trend. It has also become commonplace for observers and critics to bemoan the lack of risk-taking in industries like Hollywood, especially in their most VFX and animation-laden features. It is easy to see the minute control and techno-centric nature of VFX and animation as an extension of this.34 These highly technical VFX-laden productions do not abhor risk though; they conceive of risk differently. They know the value of novelty, of surprise, and chaos, and they have developed strategies for occasioning that contingency in such a way that they can control it. The risk we can see on screen takes the shape of nonlinear animation, of explosions, smoke, and water that look just chaotic enough to be uncanny, yet which can also be shaped by artists. Films such as these support extensive R&D work that has uncertain outcomes but pays potentially great economic and competitive dividends. This way of thinking about risk suffuses numerous facets of economics and management beyond media industries. In the same way Mary Ann Doane analyzes cinema's relationship to contingency in the context of industrial modernity, nonlinear animation can be studied as the product of broad historical epistemic change.35 Nonlinear animation represents an approach to risk present across society that we can see at work in movies of the past four decades as an animating force. Rather than being a betrayal of the cinematic tradition, it is a new chapter that responds to its historical context in the same way cinema always has, and it represents a repetition of the vitalizing, enlivening force of animation.


### Simulation and R&D: Knowing and Making

As a concept inflected by the legacy of postmodern theory, simulation's epistemic nuance and historical significance are easily underestimated. This is particularly true for cinema and media studies. Baudrillard's theory that we have constructed "models" of reality so extensive that signs have begun to refer to them, rather than to reality itself, applied all too readily to the emergence of new digital tools, especially ones that seemed to create images at once alarmingly photorealistic yet also alarmingly artificial. The 1994 release of the English translation of *Simulacra and Simulation* coincided almost perfectly with the special and visual effects of films like *Jurassic Park* (1993) and *Forrest Gump* (1994) and with the emergence of the first computer-animated feature, *Toy Story* (1995). Although the influence of postmodern theory has waned significantly since, critics continue to describe many forms of digital animation and VFX in terms of Baudrillard's concept of hyperrealism, and simulation continues to be used as a negative term to refer to artificiality.

This disposition toward digital VFX and animation has been buttressed by changes in film production that have seen a shift to post-production and technical work, where filmmaking increasingly involves engineering software, building plug-ins, programming, and writing software scripts. This less geographically and temporally specific form of media labor, where workers sit rank-on-rank in front of computer screens making and using software, has in many cases replaced more immediate (and unionized) forms of production work. New businesses driving growth in the industry, like Pixar, Sony Pictures Imageworks, and Netflix, are as much tech companies as they are studios. They fund research and development (R&D) and defend their technical intellectual property to get an edge on the competition.

What if we took these concepts of simulation and technological artificiality and treated them more complexly? For historians and theorists of technology, an artifice is a made thing: a tool, an apparatus, or a machine. This is a definition that dispenses with negative connotations of the word relating to deception, focusing instead on the Latin roots of ars (skill or craft) and facio (to make a thing). Herbert Simon, Mario Bunge, and Walter Vincenti have all developed ways of theorizing the knowledge produced by making artifices such as simulations. This approach upends the typical construction of engineering as the mere application of scientific knowledge. What would happen if we used an example like nonlinear animation to apply such an approach to cinema and media in general? We could look at the scientists and technologists developing animation tools and treat their work seriously as a particular way of seeing the world with its own potentialities. We could introduce new institutional and industrial R&D histories to add nuance and detail to a moving image culture too often defined by the telos of "the digital." We would uncover a richness of meaning in many of global Hollywood's most apparently hyperreal products.

Nonlinear simulations are models used to test theories about how unpredictable phenomena work. I use this term in a non-technical sense, to refer to any simulation of nonlinear phenomena. Some of these simulations may not be truly nonlinear themselves. A nonlinear simulation might test a theory about the flow of air or water, the way crowds behave, changes in the weather, or changes in the stock market. These tools have become central to different disciplines and industries, including management science, financial mathematics, meteorology, video games, computer art, and cinema. Studying nonlinear simulation's use in cinema and revealing the connections with other uses in other fields offers insights into an epistemic paradigm that has slowly begun to shape innumerable facets of modern society. Nonlinear simulation is a way of thinking about contingency and control that is deeply embedded in military and industrial applications and in cybernetic discourse. Its practitioners often use it to manage and exploit the unpredictable. But nonlinear simulations can also be built for fictional uses, to speculate and imagine.

The various nonlinear simulation tools used in the film industry thus offer a range of complex meaning that requires extensive historical and theoretical unpacking, and their study uncovers unmarked connections between cinema and other forms of simulation-based media such as video games and computational art. Rather than using simulation as a negative term to mean mere symptomatic postmodern artifice, the concept of simulation is vital for understanding a mode of image making that has become increasingly popular on cinema screens and that increasingly defines the way businesses and institutions see the world.

#### Simulation and Nonlinearity

On its own, simulation is a rather imprecise term. Its Latin root, similis, means likeness or similarity. In common speech, simulation can simply refer to representation by what C.S. Peirce would call an "iconic" form, that is to say, the representation of something by means of similitude, or as Peirce puts it, by "a mere community in some quality."1 A person might, for example, *simulate* the barking of a dog by attempting to make a sound like barking. Simulation refers to a much more specific concept in the context of twentieth-century science and engineering, though. Here simulation refers to making a functioning model of some kind of system or process in order to understand it. A model is a description of a theoretical mechanism. For example, the theory that water is recycled between land and ocean is supported, in part, by a model that accounts for water evaporating from the oceans, condensing in the atmosphere, and running off back into the ocean. Scientific simulation refers to putting a model into motion to test the validity of the underlying theories. For example, you could build a glass box with water and earth in it to test the aforementioned water cycle model. In other words, you make an artificial mechanism to represent a real mechanism. Stephan Hartmann defines simulation as "the imitation of a process by another process."2 Simulations mostly model physical processes, but the parameters of a simulation need not be limited to everyday physics. They can be guided by some sort of exotic physics on an astronomical or subatomic scale, but simulations can also be used to model processes like animal behavior, traffic jams, or evolution. They can even be fictional or speculative.

Simulations do not necessarily offer the same forms of evidence as experiments. This is why some philosophers of science, like Eric Winsberg, argue for the need to see simulation as a new form of science that is neither theory nor experiment.3 If philosophers of science have observed the need to theoretically grapple with simulation and its evidentiary value, what theoretical and epistemic issues might it raise for media? These questions have become particularly pressing in light of the proliferation of simulations in science, engineering, and media over the past seventy years, and in light of the increased complexity afforded to them by computer technology.

A scientific simulation can be rather simple, especially when it seeks to model a regular, predictable process. For example, an orrery, a physical model of the solar system, simulates how the planets move around each other. This is a relatively predictable, linear simulation. The movements of the planets are, after all, as predictable as the rising and setting of the sun or the movement of the tides. Most processes in the world are not so predictable. Changes in weather, for example, are nonlinear. In a nonlinear system, you may start with relatively simple conditions, but the results of those conditions can be wildly different. You cannot deterministically predict a single outcome based on initial conditions. As computer scientist Melanie Mitchell puts it, "A linear system is one you can understand by understanding its parts… A nonlinear system is one in which the whole is different from the sum of the parts."4 Many different research disciplines have used this concept of nonlinear simulation as a tool for understanding unpredictable things in the world, and different industries have sought to benefit from this research for the purposes of both prediction and control, seeking a way to understand and in some way shape the unpredictable. Starting in the 1980s, animation and VFX studios began using these nonlinear simulations to make animations (Fig. 2.1). Like their nonlinear simulation predecessors and contemporaries, these new forms of nonlinear animation mediate concepts like chance, risk, contingency, emergence, and control in a very particular way. And like scientific uses of simulation, which create a new space between experiment and theory, these new forms of media confound existing media categories, opening a space in-between automated capture and manual manipulation.
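Mitchell's distinction can be illustrated with a short sketch (mine, not the book's): the logistic map, a textbook minimal nonlinear system. Two starting values differing by one part in a million soon diverge completely, so the initial conditions cannot be used to predict a single outcome.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the nonlinear update rule x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # differs by one part in a million

# Early on the two trajectories are nearly indistinguishable...
early_gap = abs(a[1] - b[1])
# ...but over fifty steps they drift apart until knowing one trajectory
# tells you nothing about the other.
divergence = max(abs(x - y) for x, y in zip(a, b))
```

The whole trajectory, in Mitchell's terms, cannot be understood as the sum of its individually simple steps.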

#### Stochastic Simulation

There are two primary forms of nonlinear simulation that each take different approaches to modeling unpredictability: stochastic and dynamic. The history of each reveals its own epistemic complexity, the varied subjects it has been used to understand, its fictional and imaginative uses, and its eventual adoption as an important part of the animation and VFX industries. These histories both start in early modernity with exotic concepts formed by mathematicians, but nonlinear simulation only really took hold as an influential way of seeing the world in the context of Second World War and Cold War state-sponsored R&D. From there it spread from within the nascent field of computer science to fundamentally transform numerous facets of society. Though computer technology and nonlinear simulation were both shaped by the R&D institutions that supported them, and their histories are deeply interlinked, these two technologies are not the same. Stochastic and dynamic simulations identify a more precise way of seeing certain forms of contingency, and they are not always necessarily digital in nature. Thus, they offer a new historical context and demand a new theoretical framework for studying related film technologies and production practices from the past few decades.

**Fig. 2.1** Nonlinear simulation: techniques, tools, and applications

Stochastic simulation is slightly simpler, so it is the more logical place to start. A stochastic simulation is, put simply, a simulation with a random variable in it. Stochastic simulations use randomness to model the non-determinism of a nonlinear system. While it may sound absurdly reductive to model an unpredictable process through the use of proverbial dice rolls, this concept has had a profound effect on how society sees the unpredictable and complex.

It would be difficult to determine the first time someone used randomness as a stand-in for unpredictability, but this concept took on a particular meaning and utility in industrial modernity. Several versions of the concept appeared in the first decade of the twentieth century. The earliest was a model for understanding a phenomenon called Brownian motion. In the early nineteenth century, botanist Robert Brown observed that pollen suspended in water moved on an unpredictable and seemingly random path. What he was seeing was the effect of water molecules bouncing around and unpredictably hitting the pollen from different directions, imparting different vectors of momentum.5 Some seventy years later, in 1900, French mathematician Louis Bachelier became the first to formulate a model for this unpredictable movement.6 Brownian motion posed a particular problem because it was the result of a process too complex to completely model. Bachelier modeled it using a *random walk* where, rather than calculating the collisions of myriad particles, a random direction is given to the pollen at a given or random interval. In other words, if you take a moving point and choose random directions for it, you will produce a path like that of pollen suspended in water without having to simulate millions of molecular collisions.
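The random walk is simple enough to sketch in a few lines of code (a hypothetical illustration, not Bachelier's own formulation): at each interval, the moving point is simply nudged in a random direction.

```python
import math
import random

def random_walk_2d(steps, step_length=1.0, seed=0):
    """Trace a path by choosing a random heading at every interval."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        angle = rng.uniform(0, 2 * math.pi)  # randomness stands in for collisions
        x += step_length * math.cos(angle)
        y += step_length * math.sin(angle)
        path.append((x, y))
    return path

# A thousand random nudges yield a jittering, Brownian-looking track,
# with no molecular collision ever simulated.
path = random_walk_2d(1000)
```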

Bachelier believed this concept could be applied to other unpredictable phenomena. He attempted to use it to model the change in stock prices, for example. If you take what you know about the factors influencing the change in value of a commodity, and then put a random variable in to simulate the unpredictability of the real world, you can model its movement and predict a range of possible outcomes. At the time, Bachelier's economic theories garnered little interest, but they essentially describe contemporary approaches to options pricing in the field of financial mathematics.

Stochastic simulation exploded in popularity in the context of a surge in federal R&D funding during the Second World War and Cold War that also produced the reprogrammable computer and cognate concepts like systems theory. In the mid-1940s, Los Alamos National Laboratory researchers Nicholas Metropolis, Stan Frankel, and Stanislaw Ulam were trying to predict the paths of neutrons in a nuclear fission reaction, a problem that they found could not be solved through linear means. The issue was that neutrons bounce around in unpredictable ways, not unlike Brown's pollen grains. Nuclear scientist Enrico Fermi suggested they try a randomized method, where they would simulate numerous paths based on a random factor, generating a wide variety of outcomes that could be statistically analyzed in aggregate. The process would not produce a single deterministic answer, but a range of statistical likelihoods. Fermi had actually been attempting this technique using a mechanical device of his own invention back in Italy.7 Los Alamos consultant John von Neumann suggested they use the Electronic Numerical Integrator and Computer (ENIAC), the first programmable electronic computer, to run these simulations as a sort of test for the new machine. The ENIAC was designed and built for the purpose of calculating firing tables for the Ballistics Research Laboratory, but it seems von Neumann was keen to explore its potential. The team at Los Alamos dubbed their new process the "Monte Carlo method," based on the idea that it employed randomness like that of casino games.
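The neutron calculations themselves are far beyond a short example, but the principle of the Monte Carlo method can be sketched with a standard textbook illustration, estimating π by random sampling: each individual random trial means little, while the statistical aggregate converges on an answer.

```python
import random

def monte_carlo_pi(trials, seed=42):
    """The fraction of random points in the unit square that land
    inside the quarter circle approaches pi / 4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / trials

# No single random sample "knows" pi; the statistical aggregate does.
estimate = monte_carlo_pi(100_000)
```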

Stochastic simulation has since become a key tool for prediction and control in several disciplines, shaping society's relation to risk and uncertainty. Two fields that have made extensive use of stochastic simulation since the 1940s are management science and financial mathematics. Management science grew from these early activities at Los Alamos and from contemporaneous research in the field of logistical "operations research." Large institutions such as the United Steel Company, the U.S. Air Force, and General Electric were keen to explore the potential benefits of these concepts and supported early research.8 These institutions valued nonlinear simulation both for its predictive capacity and for its ability to test systems against unpredictable events. The use of concepts like the Monte Carlo method and stochastic discrete-event simulation allowed organizational structures to cope with unpredictability in coordinated systems like supply chains. Nonlinear simulation is thus a powerful tool for testing management systems against the unpredictability of reality. These disparate experimental programs eventually crystallized into the field of management science (not to be confused with scientific management) in 1959.9

These developments in management science were soon followed by developments in other fields such as finance, where Louis Bachelier's idea of calculating financial risk using stochastic simulation finally found broad acceptance in the 1970s, some twenty-five years after his death, with the development of the Black-Scholes model.10 Just as Bachelier reduced the complexity of water molecules to a random factor, so too did Fischer Black and Myron Scholes condense the unruly unpredictability of the world to randomness. Like the Monte Carlo method, options pricing in this tradition can draw a statistical sample from many discrete simulations in order to find a range of future outcomes. This new approach to determining options pricing changed the face of modern financial economics.11 The Black-Scholes model effectively gave speculators a range of outcomes that they could reasonably expect. It could not tell speculators exactly what would happen, but it could give them a range. Risky wagers that were once considered tantamount to gambling suddenly became quantifiable. While fields like management and finance once abhorred the unpredictable, stochastic simulation transformed their approach.
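As a hedged illustration of how stochastic simulation made risk quantifiable (a sketch, not the Black-Scholes formula itself), the following code prices a European call option by averaging many randomly simulated price paths under geometric Brownian motion, the price model underlying Black-Scholes; all parameter values are arbitrary.

```python
import math
import random

def mc_call_price(spot, strike, rate, vol, years, paths=100_000, seed=7):
    """Average the discounted payoff of a call option over many
    simulated end-of-period prices (geometric Brownian motion)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(paths):
        z = rng.gauss(0.0, 1.0)  # one random draw stands in for the market
        final = spot * math.exp((rate - 0.5 * vol ** 2) * years
                                + vol * math.sqrt(years) * z)
        total += max(final - strike, 0.0)
    return math.exp(-rate * years) * total / paths

# Not a certainty, but a statistically grounded range of expectation.
price = mc_call_price(spot=100, strike=105, rate=0.05, vol=0.2, years=1.0)
```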

While stochastic simulation has grown as a useful, functional part of contemporary industry, its use in fictional and imaginative applications is just as historically significant. Both of these parallel histories inform the use of nonlinear animation in contemporary cinema. Indeed, these two genealogies reveal new connections between forms of media like computational art, video games, and cinema. The fictional use of stochastic simulation is even older than Bachelier's work on Brownian motion. Games of chance that use dice or cards have a history that stretches back well beyond modernity, all the way to the ancient use of sheep knucklebones as proto-dice. Yet games of chance are not necessarily simulations. The first example of a game that sought to model real-world phenomena while also using dice was likely the Prussian "kriegspiel." In the late eighteenth century, Prussian entomologist Johann Hellwig came up with the idea of using a game like chess to model famous historical battles as a sort of hobby. Later, Georg Reiswitz Jr., a former artillery officer and son of game designer Georg Reiswitz Sr., introduced the idea of using dice. His reasoning was based on his experience with the "uncertainty" of artillery accuracy.12 Even the best artillery crew will sometimes hit and sometimes miss their target. If a simulation of battle seeks to reflect reality, he reasoned, it must therefore simulate this uncertainty.13 During the Napoleonic wars, the Prussian state embraced this concept as having legitimate strategic value as part of military science. The adoption of the kriegspiel was part of a combination of new technologies that were drawing the attention of modern militaries, including recent advances in cartography, the application of statistics, and Daniel Bernoulli's work on the principles of probability.14

While the kriegspiel oscillated back and forth between more fictionalized, playful uses and more practical, serious ones, it laid the conceptual groundwork for a world of fictionalized stochastic simulation games. Elizabeth Magie's *The Landlord's Game* from 1904, for example, applied the economic theories of Henry George and incorporated the randomness of dice rolls to simulate a real estate market. While it was intended as a learning tool and was largely used in universities at first, a later iteration in the form of *Monopoly* defines the board game for many today. The Prussian kriegspiel also acted as inspiration for tabletop battle games published by Avalon Hill starting in 1952, as well as role-playing games like Gary Gygax's *Dungeons & Dragons* in 1974.15 These games would in turn inspire the first mainframe-based role-playing video games of the 1970s, such as *Moria* and *Avatar*. Stochastic calculations continue to play an important role in digital games of all kinds, now popularly referred to as RNG (random number generator) by players, who facetiously pray to "RNJesus" when confronted with simulated chance in a game.

Stochastic simulation has also influenced different forms of generative art. In the 1950s, John Cage used the concept of chance as a part of musical composition. As early as 1965, artists such as Georg Nees and Frieder Nake were using stochastic computer programs to create what they termed generative art or artificial art (künstliche Kunst).16 Since then, stochastic concepts have inspired a great deal of artistic experimentation with computers, including fractal art, which reached a wide range of audiences in the 1980s and 1990s. While academics and researchers were generally responsible for early generative computer art because they had access to computers, more recent open-source software such as Processing and Context Free has brought it to millions of users.

In 1982, these diverse creative, imaginative, fictional, institutional, military, and industrial uses of stochastic simulation found their way onto cinema screens for the first time via computer graphics in Lucasfilm's "Genesis sequence" in *Star Trek II: The Wrath of Khan*. This animated VFX sequence depicts a planetary-scale explosion and subsequent terraformation within the diegetic frame of a computer visualization. Stochastic simulation played a role in two aspects of this effect. First, the film features a seemingly endless mountain range where each peak and valley looks different. These 3D models were created using a technique invented by Boeing engineer and Pixar co-founder Loren Carpenter.17 Rather than repeating the same shapes over and over, or spending endless hours modeling the landscape by hand, Carpenter's technique used stochastics to create the landscape automatically. The film also features clouds of individual particles meant to look like an explosion, where each individual particle travels on its own unpredictable path. This was the result of a stochastic technique developed by another Pixar co-founder, William T. Reeves.18
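Carpenter's published technique subdivided 3D surfaces; the following one-dimensional sketch (an illustration, not his code) shows the underlying idea of fractal midpoint displacement: split each segment and nudge the midpoint by a shrinking random offset, yielding a jagged skyline that never repeats.

```python
import random

def midpoint_displacement(iterations, roughness=0.5, seed=1):
    """Split every segment and nudge each new midpoint by a
    shrinking random offset."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]          # start from a flat horizon
    offset = 1.0
    for _ in range(iterations):
        new_heights = []
        for left, right in zip(heights, heights[1:]):
            mid = (left + right) / 2 + rng.uniform(-offset, offset)
            new_heights += [left, mid]
        new_heights.append(heights[-1])
        heights = new_heights
        offset *= roughness       # finer scales get smaller bumps
    return heights

# Eight subdivisions turn two points into a 257-point mountain profile
# in which no peak or valley repeats.
ridge = midpoint_displacement(8)
```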

Pixar would continue to develop these and other techniques based on stochastic simulation. The following year they would begin to use it in ray tracing, a key early technology for 3D rendering.19 Ray tracing simulates the way light interacts with materials by calculating factors like reflection, refraction, scattering, and dispersion, throwing in randomness to substitute for the subtle details and variations that affect these phenomena. In some cases, stochastic simulation has been overshadowed by more complex simulations that model the dynamic forces at work. Yet it continues to be useful for certain applications, especially ones where detail can be traded off for efficiency. Ray tracing is still a fundamental part of 3D rendering, and stochastic simulation is still used in more recent spectacular effects such as the shattering of rigid objects, which requires creating the random path of a crack in the uniform digital surface of a breakable object like a vase or a statue.20

For over a century, people have been using stochastics to model unpredictable phenomena. This concept took on particular meaning in the context of industrial modernity and of the new scientific and organizational fields that were supported by governmental and industrial R&D during the Second World War. It has since been enshrined as a vital part of fields like financial mathematics and management science. At the same time, artists and media industries have adopted it for fictional uses. Both scientific and fictional uses of this concept entail a certain way of mediating contingency and complexity. While mathematical probability seeks to quantify the unpredictable, stochastic simulations embrace it, if only to seek further control in the end. This is particularly evident in examples like the VFX in *The Wrath of Khan,* where an important part of the visual appeal of the effects is their unpredictability. This logic is also at work in the case of dynamic simulation.

#### Dynamic Simulation

While stochastic simulation deals with complexity by substituting it with randomness, dynamic simulation instead models that complexity. To understand the nature of dynamic simulation, one must go back to Bachelier's mentor Henri Poincaré and his solution to the "three-body problem." This classic problem sees three planets in each other's gravitational fields, with each body influencing the others in turn. The difficulty of the problem stems from the fact that every force exerted from one body onto another feeds back to the first body via the mediation of the third. The problem cannot be *solved* in the traditional deterministic sense, based on initial conditions. This is a case of dynamic complexity. Poincaré's solution was to describe the range of possible outcomes, but another way to model this problem is to use a continuous dynamic simulation. A continuous dynamic simulation would constantly take the resulting forces and re-input them into the problem, continuously revising the conditions. Dynamic simulation has become influential in physics, engineering, meteorology, and sociology, transforming different facets of society, just like stochastic simulation. And just like stochastic simulation, dynamics have given rise to new forms of media with new ways of imagining the unpredictably complex.
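A continuous dynamic simulation of this kind can be sketched in a few lines (an illustration with arbitrary units and starting positions, not a scientific integrator): at each time step, the gravitational forces are computed and then fed back in as the new conditions.

```python
def step(bodies, dt=0.01, g=1.0):
    """One time step: compute gravitational forces, then feed the
    results back in as revised velocities and positions."""
    forces = []
    for i, (xi, yi, _, _, mi) in enumerate(bodies):
        fx = fy = 0.0
        for j, (xj, yj, _, _, mj) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            fx += g * mi * mj * dx / r3
            fy += g * mi * mj * dy / r3
        forces.append((fx, fy))
    return [(x + vx * dt, y + vy * dt,
             vx + fx / m * dt, vy + fy / m * dt, m)
            for (x, y, vx, vy, m), (fx, fy) in zip(bodies, forces)]

# Three equal masses; each tuple is (x, y, vx, vy, mass).
bodies = [(0.0, 0.0, 0.0, 0.1, 1.0),
          (1.0, 0.0, 0.0, -0.1, 1.0),
          (0.5, 1.0, 0.1, 0.0, 1.0)]
for _ in range(100):
    bodies = step(bodies)  # conditions continuously revised
```

No closed-form solution is consulted; the trajectory exists only as the accumulated history of these feedback steps.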

A good way to identify a nonlinear system is to ask whether the past predicts the future. Financial markets are an example of this. Though we might use past information to build predictive models, the market's past behavior does not tell us what it will do tomorrow. Another classic example of this is the weather.21 Vilhelm Bjerknes, a founder of modern meteorology, identified this challenge in 1904, when he likened it to Poincaré's three-body problem.22 It was not until the 1940s, however, that researchers began to engage this problem through dynamic simulation. The complex problem that Bjerknes laid out was too great a temptation for researchers working with the earliest computers with the backing of a wartime government. John von Neumann described it as "the most complex, interactive, and highly nonlinear problem that had ever been conceived of."23 In 1946, von Neumann and fellow researcher Jule G. Charney organized a research group to explore computer weather simulation at the Princeton Institute for Advanced Study using grants from the Navy and Air Force, and soon after, the ENIAC (the same computer used for the Monte Carlo method) was producing short predictions with relative accuracy.24 This research led to leaps in the understanding and modeling of weather systems and was followed by many other developments by the likes of Edward Norton Lorenz, who would develop his "chaos theory" based on weather simulations.25

At the same time computational meteorology was taking shape as a research discipline, other scientists were forming a parallel branch of research at Los Alamos named the T3 Group, which focused on the dynamic modeling of fluids of all kinds. This research took much the same approach as weather modeling because the problem was basically the same. Both approached dynamic phenomena by breaking them into cells and calculating the vectors of movement of those cells based on factors like momentum and pressure difference.26 Research into computational fluid dynamics (CFD) promised benefits for engineering and design. For example, while high-speed aircraft designs were tested using physical models and wind tunnels at that time, CFD promised the ability to test designs virtually. CFD also allowed for the simulation of combustion in internal combustion, jet, and rocket engines. CFD was eventually packaged into multipurpose engineering software that could be used by a variety of industries, such as Klaus-Jürgen Bathe's ADINA software, developed in 1974 while he was at MIT, and applications like PHOENICS, Star-CD, and Fluent, all developed by scholars who had worked at Imperial College's CFD Unit in the late 1970s.

As fluid simulation was becoming enshrined as standard practice in engineering, forms of visual media began to take a similar epistemic approach to fluidity. In the 1990s, studios and software companies began to adapt CFD technologies for animation and VFX, with tools like Arete's Digital Nature Tools, Next Limit's RealFlow, Exotic Matter's Naiad, and Digital Domain's FSIM and STORM. All these pieces of software animated the motion of nonlinear phenomena like splashing water or clouds of smoke. These animation tools share essentially the same user environment and principles of simulation as the ones designed for commercial design and engineering. The one subtle distinction between media and engineering applications is that engineering puts a strong emphasis on fidelity, empirical reliability, and prediction, while animation and VFX tools are more preoccupied with simulation speed and the "directability" of simulations. As a result, simulation software for VFX and animation diverged somewhat from engineering and scientific research tools. This is not to say that there are no longer connections between the film industry and scientific research, though. As Chap. 3 will demonstrate, there are constant transactions between film and other industries through professional organizations, academic institutions, and the circulation of researchers. Scientific applications often employ spectacular and cinematic images, while media industry applications often promote their scientific realism.

Another parallel line of dynamic simulation research was focused not on fluids or weather but on agents and evolving systems. This type of simulation developed into a different set of nonlinear animation tools. The point of origin for this research was John von Neumann's concept of *cellular automata*, which he formulated in the same era he was directing research groups on weather simulation. The principles of cellular automata are deceptively simple: a given grid has either black or white squares, and the squares change state depending on rules about the state of their neighboring squares. This is an example of a dynamic system because one square can change the conditions of the others, which in turn change the conditions of the initial square. As with weather and fluid simulation, simple conditions lead to dynamic complexity. This concept would later be developed by the conceptually similar technologies of agent-based simulation and evolutionary simulation (genetic algorithms), technologies that would be used for sociological research, infrastructure planning, architecture, and certain kinds of nonlinear animation.

An agent-based simulation entails putting many virtual agents with a given set of behaviors into a world with specific rules and seeing how they will interact with each other and their environment. Researchers employed this as a way of studying the dynamic behavior of populations. If you believe certain rules govern behavior, you can run a simulation to see what sort of behavior results from those rules. For example, in 1971 T.C. Schelling published a study where he attempted to understand neighborhood racial segregation by designing a grid with two different kinds of agents that were programmed to move if a certain number of the other kind of agent lived next to them. This research was conducted at Harvard and sponsored by the RAND Corporation.27
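A miniature version of such a model can be sketched as follows (the grid size, densities, and tolerance threshold are illustrative choices, not Schelling's originals): agents relocate to a random empty cell whenever too few of their neighbors are of their own kind.

```python
import random

SIZE, THRESHOLD = 20, 0.5   # grid width; required fraction of like neighbors

def neighbors(grid, r, c):
    """Values of the eight surrounding cells (the grid wraps around)."""
    cells = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    return [v for v in cells if v is not None]

def unhappy(grid, r, c):
    near = neighbors(grid, r, c)
    if not near:
        return False
    like = sum(1 for v in near if v == grid[r][c])
    return like / len(near) < THRESHOLD

def run(rounds=30, seed=3):
    rng = random.Random(seed)
    cells = ["A"] * 150 + ["B"] * 150 + [None] * 100   # None marks empty lots
    rng.shuffle(cells)
    grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]
    for _ in range(rounds):
        for r in range(SIZE):
            for c in range(SIZE):
                if grid[r][c] is not None and unhappy(grid, r, c):
                    empties = [(i, j) for i in range(SIZE)
                               for j in range(SIZE) if grid[i][j] is None]
                    er, ec = rng.choice(empties)
                    grid[er][ec], grid[r][c] = grid[r][c], None
    return grid

# Even this mild preference rule produces visibly clustered neighborhoods.
grid = run()
```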

An important event in the development of this type of research was the formation of the Santa Fe Institute. Founded in 1984, predominantly by researchers from the Los Alamos National Laboratory, the Santa Fe Institute funds and facilitates research on various topics that employ complexity as a research paradigm. While the institute has contributed to subjects relating to physics and theoretical mathematics, it has also sponsored several agent-based and evolutionary simulations. For example, the institute co-sponsored a project called Sugarscape by Joshua M. Epstein and Robert Axtell, a simulation in which agents gather, trade, consume, and excrete a consumable commodity. Through their research, they hoped to learn about humanity's consumption of natural resources and the development of societies more generally.28

In cases such as Sugarscape, a complex and intricate system with unpredictable shapes and behaviors emerges from what was once a relatively simple set of rules. While this type of emergence can be useful for understanding those initial conditions, it can also be used to analyze the process of development and change, in other words, its evolution. A good example of this is John Conway's influential "Game of Life" simulation from 1970. Based directly on von Neumann's original concept of cellular automata, Conway's version sees pixels on a grid forming into self-sustaining entities that interact with each other. Examples of these entities include the "glider" and "glider gun," which seem to fly across the grid like birds. Conway's work fostered the idea that one could create virtual living organisms through relatively simple simulations, a discourse this book will address in more detail in Chap. 6. After this, researchers began testing to see if simulated agents could optimize their behavior through evolution and perhaps even learn. In 1975, John Holland laid out the fundamentals of genetic algorithms and learning in his book *Adaptation in Natural and Artificial Systems.* Examples from his book included a robot that could learn the most efficient way to pick up cans through a process of trial-and-error learning.
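Conway's rules are compact enough to quote in full as code. The Python sketch below is an illustration, not code from any of the systems described here: it advances a set of live cells one generation, then runs the glider through four generations, after which the same five-cell shape reappears one cell further along the diagonal, producing its apparent flight.

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life one generation. `live` is the set of
    (row, col) coordinates of live cells; the grid is unbounded."""
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation if it has exactly three live
    # neighbours, or two live neighbours and is already alive.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The five-cell "glider":
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# After four generations the glider reappears intact, shifted one cell
# diagonally, giving the impression of flight across the grid.
```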

Much like weather and fluid simulation, both agent-based simulation and genetic algorithms have been taken up and further developed by VFX and animation. Multiple Agent Simulation System in Virtual Environment (MASSIVE), developed by VFX industry veteran Stephen Regelous at Weta Digital, draws specifically on technology and concepts from agent-based simulation.29 Weta developed MASSIVE as a way of animating hordes of moving and fighting creatures, which were too numerous to stage in real life. Past efforts to render large groups suffered from the appearance of patterns. It looks unnatural if every character behaves the same. If you vary characters amongst a finite set of animated behaviors, uniform patterns still emerge, which spectators perceive as too regular and artificial. MASSIVE produced a far more naturalistic effect by setting simple behavior rules for agents and running a dynamic simulation where they reacted to each other. Weta put MASSIVE to use in *The Lord of the Rings: The Fellowship of the Ring* (2001). Regelous has since left Weta Digital and formed his own independent company that now sells MASSIVE 2.0. MASSIVE has also spawned an engineering tool of its own, and the company markets its products to engineers and architects, not just as a visualization tool, but also as a way to simulate the movement of people through a building.30
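The principle, though not the proprietary detail, can be conveyed with a toy example. The Python sketch below is hypothetical and drastically simplified (MASSIVE's agents have elaborate rule-based "brains," not a one-line behavior): twenty agents march along a line toward a goal under a single shared rule, but because each reacts to the agents around it, the group's motion never settles into a uniform, repeating pattern.

```python
import random

def crowd_step(positions, goal=100.0, spacing=2.0, speed=1.0):
    """One tick of a toy crowd marching along a line toward a goal.

    Every agent follows the same rule: advance unless another agent is too
    close ahead, with slight random variation in stride. The irregularity
    of the crowd emerges from agents reacting to one another."""
    new_positions = []
    for i, x in enumerate(positions):
        blocked = any(0 < other - x < spacing
                      for j, other in enumerate(positions) if j != i)
        stride = 0.0 if blocked else speed * random.uniform(0.8, 1.2)
        new_positions.append(min(x + stride, goal))
    return new_positions

random.seed(7)
crowd = [random.uniform(0.0, 10.0) for _ in range(20)]
for _ in range(50):
    crowd = crowd_step(crowd)
```

Replaying the simulation with a different seed yields a different, equally plausible crowd, which is precisely the naturalistic variety that hand-cycled animation loops lacked.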

Genetic algorithms have similarly proven useful for animating characters. This technology was developed by a third-party software company called Natural Motion, founded by software engineer Colm Massey and researcher in evolutionary and adaptive systems Torsten Reil. Natural Motion developed the concept of "stuntbots," simulated characters that are programmed to learn to stand upright and to react to external forces. They will attempt to steady or right themselves in response to getting knocked over or thrown. Natural Motion's Endorphin software has been used in a variety of Hollywood movies such as *Troy* (2004) and *The Lord of the Rings: The Return of the King* (2003). Like MASSIVE, Endorphin solves the problem of how to naturalistically animate the movement of characters when it would be unfeasible to do it with keyframe animation or motion capture.31

Many of these technologies have been the subject of breathless promotional bluster. Therefore, one should exercise a certain degree of caution when studying the connections between VFX, animation, and scientific research. One way a studio might promote the realism of its work is by promoting its connection to scientific realism. A clear example of this can be found in the promotion of the film *Interstellar* (2014). During its release, there were numerous stories in the press about how the animation of the black hole in this film was made with the aid of a simulation designed by prominent astrophysicist Kip Thorne. It was "the most accurate simulation ever of what a black hole would look like," according to a story in *Wired*.32 The fact that some VFX and animation tools use the same concepts used in other domains of science and technology does not tell us that they are realistic, though. Instead, it tells us that they share a certain epistemic frame, a certain way of seeing the world and making sense of it. Nonlinear animations are fictionalized versions of scientific simulations.

Over the past century, stochastic and dynamic simulations have brought forth a different way of seeing the world. This has been a relatively gradual transition, but since the 1940s this way of understanding and controlling the unpredictably complex has spread from exotic science, to functional tools, to popular culture. Many examples of nonlinear animation, like *The Wrath of Khan* or *The Fellowship of the Ring*, are treated by the industry as landmarks in digital filmmaking. But there is another history at work here. These visual effects are not just examples of digital technology; they are examples of nonlinear simulation. We can look to these images as historical indexes of the role nonlinear simulation was beginning to play in our lives, whether we knew it or not.

Stochastic and dynamic forms of nonlinear animation often share in the myths and cybernetic discourses of the research tools that preceded them, similar to the way Patrick Crogan sees a Cold War cybernetic discourse designed to predict the future as having influenced the shape of video games.33 One such myth, which Philip Mirowski terms "cyborg science," conflates the ontology of real phenomena with the computational simulations used to understand them.34 We can see this sort of thinking at work in examples like Conway's Game of Life, which seems to ignore the difference between the emergent behavior of virtual patterns and the immense unknown complexity of real life. Paul Edwards similarly critiques the discourse of "central command and control" at work in Cold War computing, where politicians and the military were seduced by the idea that they could attain totalizing control by computationally merging different sources of information.35 Again, the way nonlinear animation tools create computational uncertainty only to further control it suggests a similar way of thinking, especially when these tools are used in place of recording images of real phenomena. Yet we should not reduce nonlinear animation tools to these Cold War discourses. Researchers and practitioners do not always conflate the ontologies of simulation and reality, and they do not always seek totalizing control. Even military-driven R&D itself can be viewed in a more nuanced way.

#### R&D: Engineering and Science in Modernity

The nonlinear animation technologies developed for the film industry and the nonlinear simulation technologies developed for various other scientific and industrial applications both resulted from a very specific historical, institutional, and epistemic context, defined by the concept of research and development. R&D is, in essence, a theory of how technology is created in modernity. This follows from a modern definition of technology that first emerged in the late nineteenth century in Germany in the form of the term "Technik," meaning "industrial arts," and more generally the application of scientific thought, both for the construction of artifices and for the purposes of organization or systemization.36 Rather than limiting engineering and design to existing knowledge, R&D attempts to marshal new knowledge, directing it toward technical application. This way of thinking was central to American Cold War policy, which sought new technologies that could provide strategic advantages, but its influence extends well beyond this. The discourse of R&D continues to shape countless industries, including film industries. The fact that contemporary films feature screen credits such as "R&D," "R&D Artist," and "Principal Graphic Scientist" gives us some sense of the concept's influence.

Up until very recently, analysis of animation and VFX since 1980 has focused on digital technology as the key subject of change: what happens to film when it is reduced to ones and zeros? While this is a valid question, R&D points us to different theoretical questions and directs our attention to other historical factors.37 Rather than focusing on the telos of numerical discretization and calculation, we can instead look to the institutional conditions that produced technologies like computers and simulation. Applying the history of R&D to nonlinear animation additionally draws attention to the fact that these technologies were not external forces that had deterministic effects on film production. Instead, it shows how the development of these tools was an internal process, shaped and facilitated by studios and software companies. This shifts agency from the abstraction of technology to the practical reality of institutional and industrial organization. When Raymond Williams makes the case, contra technological determinists like Marshall McLuhan, that society and culture determine the shape of media technologies, he highlights R&D as a site where this process takes place.38 Television was not discovered as some technological concept external to society, and neither was nonlinear animation.

Implied in the words research and development is a relation between theoretical and practical knowledge, basic science and applied engineering, or "knowing that" and "knowing how."39 Although scientists and policy makers have critiqued R&D as anathema to the cause of basic science and freedom of inquiry, the concept opens new avenues for understanding how nonlinear simulation is used by scientists, engineers, and artists to make sense of the world. Simulations are artifices built to learn about the world. While scholars like Paul Edwards and Patrick Crogan argue that the roots of simulation can be found in Cold War governments' drive to anticipate the future in an uncertain and dynamic world, this more basic epistemic topic allows us to uncover an older genealogy. It also allows us to recognize a method for understanding the world that is not defined by its military application, even if the military played an important role in its history. The history of R&D therefore offers a fuller picture of the fictional and creative use of nonlinear simulation by artists and the film industry.

Thomas Edison's Menlo Park research laboratory is a historically significant archetype for modern R&D in North America.40 This new form of institution brought scientists together to conduct research that was directed toward developing specific technologies. It was made possible by changes in patent laws that saw employers retaining the rights to discoveries made by researchers.41 Following the Menlo Park paradigm, the drivers of American industry soon all had research labs. Not long after industry recognized the potential of R&D, the U.S. government sought to take advantage. In 1915, the Navy formed a science advisory committee, chaired by Edison. The following year the United States government formed the National Research Council with a board also populated by industry figures.42 The marriage of science and engineering was now recognized as a matter of national importance. This was also the period when educational institutions such as MIT and Caltech, which merged science and engineering, began to flourish with the help of government funding for R&D-oriented projects.43 Funding for R&D would one day raise technology-oriented educational institutions such as these to the point where they rivaled the old elite universities, some of which resisted the integration of engineering and technical training.44 "Research and development" took hold as a common term to describe this logic in the early 1940s, with the formation of the United States OSRD (Office of Scientific Research and Development).45

The concept of scientific simulation is deeply connected to the logic of R&D. Even before computer simulation, R&D institutions funded considerable work on material simulations. For example, NACA (National Advisory Committee for Aeronautics), a contemporary of Edison's Navy committee and predecessor to NASA, made simulation a key component of its mission via the wind tunnel. NACA-sponsored researchers at Stanford made important early advances in propeller design through wind tunnel work. The construction of new wind tunnels such as the VDT (Variable Density Tunnel) in 1922 and the FST (Full Scale Tunnel) in 1931 led to significant discoveries that put the United States at the forefront of aeronautic research. The wind tunnel was a tool for physical simulation. It created artificial conditions that were meant to mimic real-world conditions. It offered the opportunity to better understand the dynamic properties of air in motion, but it also allowed the practical testing of aerodynamic designs. Computer simulation would be developed for the same uses. Some of the examples of nonlinear simulations in the preceding section serve a similar testing function. Indeed, technologies such as fluid simulation have effectively replaced the wind tunnel's role in R&D, making these expensive physical facilities far less common than they used to be. Nonlinear animation is a product of this history, and understanding the stakes of these issues will help us to understand changes in the film industry and its technology.

Institutions like NACA and the OSRD demonstrate that R&D took shape at a nexus between military and industrial interests, and that it represented a merger between academic science and engineering. As historian Stuart Leslie puts it, this configuration "blurred distinctions between theory and practice, science and engineering, civilian and military."46 As early as 1945, influential research policy maker Vannevar Bush expressed concern over how military R&D directives were transforming science and limiting scientific "freedom of inquiry." Bush published two important texts at the end of the Second World War on this topic: "Science: The Endless Frontier," a whitepaper addressed to President Roosevelt, and "As We May Think," an article published in *The Atlantic*.47 Bush had been responsible for administering funding for scientific research during the war. He led NACA and directed government funding during the Second World War as head of the OSRD, the forerunner of ARPA (Advanced Research Projects Agency) and the NSF (National Science Foundation). He was also an active cyberneticist and a co-founder of Raytheon, a major military R&D company. Yet in these papers, he calls for an end to the "intermediate" science that was being done during the war and a return to basic research.

While R&D raises the issue of how scientific research is influenced and directed toward certain negative ends, many philosophers of science have cautioned against interpreting engineering and technology in negative terms as contrasted with a purified, idealized definition of science.48 The concept of simulation plays a key role in this philosophy. In an influential essay from 1966, Mario Bunge argues that "technology is as theory laden as science" and that there should be a distinction made between "substantive theories," such as the principles of airflow, and "operative theory," such as how airplanes are designed and how airports are organized.49 These new forms of inquiry produce new forms of knowledge. Many historians and philosophers would investigate this issue further, giving greater consideration to the many forms of knowledge produced by engineering and technology.50

Aeronautical engineer and historian Walter Vincenti similarly questions why we construct science as the site of knowledge and engineering as the mere application of that knowledge. As a corrective, he seeks to theorize the type of knowledge produced through engineering. Using examples from his field of aeronautics, he argues in his work that engineering produces empirical knowledge through the testing of designs.51 For example, testing a new wing design in a wind tunnel is a kind of experiment that produces knowledge. Although Vincenti and Bunge are not directly discussing simulation but rather engineering and technology more broadly, it is worth noting that their key examples do involve simulation.

Herbert Simon makes this connection explicit, arguing that computer simulation is in essence a form of engineering epistemology: it understands the world through the design and testing of models.52 You make a model (material or digital) based on a theory and you see what will happen under certain conditions. Simulation provides a form of knowledge that is not exactly empirical (it does not come from actual events), yet it is not entirely theoretical either. The R&D paradigm, which was born of an industrial and governmental desire for technological advance, thus produced a new form of knowledge. While we should remain critical of the way R&D is often employed for militaristic or politically suspect ends, this is not a reason to ignore the theoretical complexity of this way of thinking. This is a new form of knowledge we can look for in digital VFX and animation, especially in examples of nonlinear animation, and we should seek to understand it. Indeed, while digital technology has received the majority of attention as an agent of change in the past few decades of film history, R&D, and R&D institutions like the field of computer science, have themselves been important agents of change.

#### Computer Science and R&D

As a research discipline, computer science is perhaps the most paradigmatic example of the institutional logic of R&D. It is a field where making things and doing research are one and the same. While other scientists try to understand the physics of weather patterns or the behavior patterns of people, computer scientists study an artifice, a made thing. The institutional context of this new discipline embraced and expanded the epistemic logic of simulation as the "science of the artificial." It also became the site where research for new media industry tools like nonlinear animation took place. Computer science organizations like the Association for Computing Machinery (ACM) and its special interest group SIGGRAPH have become imbricated with media industries since 1980, particularly Hollywood, as the following chapter will explain in detail. Thus, the rise of the logic of R&D in film industries like animation and VFX offers a different context for changes in the past few decades. Rather than looking for the effects of digital technology on filmmaking, one can see how R&D produced the conditions for those technologies to be developed and used.

There are two possible definitions for computer science that posit two different starting points. On the one hand, one might include all research that was in retrospect conceptually relevant to the computer avant la lettre. Here we would include things such as Alan Turing's theoretical work in the 1930s, Lord Kelvin, Ada Lovelace, and Charles Babbage's work on a differential analysis machine, and so forth. Lev Manovich's genealogy of new media, for example, privileges Babbage as the starting point of computers.53 The second definition of computer science would instead be limited to the institutional formation of computer science departments in universities. This latter institutional definition prompts us to consider the external conditions that shaped the discipline and reveals how intimately linked computers are to the logic of R&D. Rather than interpreting nonlinear animation (or computer graphics in general) as extending from the fundamental properties of numerical computation laid out by the likes of Babbage, we can instead look to the institutional and epistemic frame of R&D.

Computer science was not necessarily destined to be an R&D discipline. In its earliest days there were many who imagined it as part of more traditional academic pursuits. A good example of this is early British computer science. In the British context, the computer was seen as a theoretical and philosophical tool. Nineteenth-century experiments with differential analysis machines in Britain had very different goals than the computer science research conducted by people like Vannevar Bush in the US in the 1930s and 1940s. Mark Bowles attributes these differences to culture; while the "technological style" of American computer science was one of engineering and optimism, the British approach was mathematical and theoretical.54 While Bowles' cultural observations are intriguing, there are also important historical reasons for these differences. The context that produced the American version of computer science was a military-industrial-academic complex that saw research funds earmarked for things that might yield geopolitical strategic advantage. As Paul Ceruzzi argues, the U.S. struck a balance between state and private involvement in academic research, unlike Europe or the USSR.55

Though the idea that only the U.S. could have invented the programmable electronic computer sounds like a step too far toward exceptionalism, it is true that different cultures sought to make use of the same concepts for different purposes. Different contexts led to different kinds of computer science. The purpose of American computer science, the version that came to dominate globally as a paradigm, is oriented toward developing new technologies that might benefit industrial and national interests.

The first reprogrammable electronic computers in the United States demonstrate the logic of R&D and the cooperative unions formed between government, industry, and research institutions. The ENIAC was designed and constructed for the United States Ballistic Research Laboratory by two members of the University of Pennsylvania's Moore School of Electrical Engineering, physicist John Mauchly and electrical engineer J. Presper Eckert. Mauchly and Eckert decided to go into business for themselves after the ENIAC, selling their second computer, the UNIVAC (Universal Automatic Computer), commercially. Yet their first customers were the United States Census Bureau and Northrop Aviation, a major Air Force contractor.56

Once the computer became a product, demand for trained professionals to design and maintain systems grew. Products like the IBM 650 were sold to universities if, and only if, those universities agreed to teach a course in computer science.57 The fact that a private company would offer a special deal to universities demonstrates the synergistic logic of R&D. IBM had an interest in increased academic use of computers for several reasons. First, researchers might discover new uses for the computer, thus leading to future products. Second, IBM ensured that future workers who would go on to jobs in government and industry would be familiar with their equipment. Third, IBM had an interest in hiring new researchers and engineers, and developing relationships with universities ensured they would have access to the best minds of the future. This logic continues in the relationship between VFX and animation studios and research institutions.

There was, however, resistance to the R&D nature of computer science in some of the more prominent and established universities in the United States. The idea that technicians, people who build things, would be rubbing shoulders with professors was objectionable to traditionalists.58 Harvard, for example, was resistant to the inclusion of any kind of engineering field at first.59 They imagined themselves as educating the leaders of tomorrow, not technical workers. Although the University of Pennsylvania's Moore School was pivotal in inventing the programmable computer, when computer science first started to take shape as an academic discipline the school chose to outsource the operation of the actual devices.60 Building technical things was not part of the traditional liberal arts education.

Rather than being a discipline that advanced the design of ever bigger and better technologies in concert with industrial and governmental interests, universities such as these imagined computer science more as an extension of mathematics, much the way it was imagined in the British context. This approach protected this new discipline from undue direction and shaping.61 If computer science is theoretical and largely useless to military and industry, then it is free to explore the potentiality of computation in any direction it sees fit, without influence from directed funding: computer science as basic science. As computer scientist Michael R. Fellows argued at the time, "computer science is not about machines, in the same way that astronomy is not about telescopes." The engineers responsible for building and running computers were mere service people, hardly the peers of the mathematical researchers. These criticisms have endured in the form of philosophical discussions about the epistemic role of computer science. Is it a science? Is it bad science?62 The role of R&D in computer science today is beyond question, though. To today's computer scientists, this resistance sounds like pure elitism, the product of stodgy, out-of-touch professors. The merger of "knowing that" and "knowing how," of science and engineering, of research and development, was irresistible in the end. Furthermore, as Simon, Bunge, and Vincenti argue, there is no reason to privilege science as the only source of knowledge. We learn a great deal by making artifices, especially simulations.

Computer science and R&D created the institutional context that gave rise to the many different uses of nonlinear computer simulation, from financial mathematics to management science to the nonlinear animation tools employed in Hollywood blockbusters. This context suggests that nonlinear animation offers a vision of power and control much like management science or finance. While stochastics and dynamics seem to introduce a little bit of anarchy and chaos into calculations, they create chaos in order to contain it. The following chapters will further study nonlinear animation in this context. But limiting study of this subject to only this one angle misses out on some of the epistemic nuance of nonlinear animation. Just as cinema was a paradigmatic product of the episteme of industrial modernity, it was also a medium that gave rise to a range of different visions. Nonlinear animation should be approached just as holistically.

#### Speculative Simulation

Though they share a great deal with tools used by managers and investors, nonlinear animation tools open potentialities for fictional, imaginative, and speculative use. Some work in this direction has already been done in the field of game studies. The initial push to define game studies as a field discrete from film or literature studies, the so-called ludology versus narratology movement, centered on the concept of games as a simulation-based medium that should be interpreted based on the rules and causal structures programmed into them. In his introduction to the very first issue of the online academic journal *Game Studies,* Espen Aarseth writes that the concept of simulation is "crucial" to ludology as a "radically different alternative to narratives."63 Gonzalo Frasca, another foundational ludologist, treats games and simulations as virtually synonymous. In several essays, he insists on the difference between simulation, which models the mechanical function of systems, and "representation," which he associates with painting and film.64 The field of game studies has since made peace with narrative and visual analysis, and expanded into numerous other methodologies including ethnographic study of players.65 But these early ludology concepts continue to be important to the field, and they provide a starting point to begin to understand the way nonlinear animations make meaning as simulations.

Without specifically naming nonlinearity, Aarseth and Frasca both note the unpredictable outcomes of games and the dynamics of play as defining qualities. Aarseth writes that "The complex nature of simulations is such that a result can't be predicted beforehand; it can vary greatly depending on the player's luck, skill and creativity."66 Frasca discusses games as "dynamic systems" with unpredictable outcomes.67 This is key to the way they differentiate games from linear narrative forms. Narrative media tell a story, while games see players participate in the writing of a story. The essence of ludology is the writing of rules that govern the mechanisms of unpredictable and dynamic play.

As I noted earlier in this chapter, nonlinearity became a key part of gaming in the Prussian war games of the 1800s. The desire to quantify every aspect of war led to the idea of using random dice rolls to simulate the uncertainty of the battlefield. These simulations combined the dynamics of a chess game with the naked stochastic randomness of games of chance and used both to model the mechanisms of real-world events, both in order to better understand them and to anticipate the future. Although early ludologists like Frasca emphasize the transformative effects of the computer, many scholars have since noted the longue durée of this genealogy. Jon Peterson notes the connection between war games and tabletop role-playing games like *Dungeons & Dragons*, and William Uricchio notes these features at work in computer games with large historical sweeps like Sid Meier's *Civilization* series. Here again the concept of nonlinear simulation is key. Uricchio describes how these historical games set conditions, like a computer simulation of some past event, but the unpredictable unfolding of gameplay leads to different outcomes.68 So, for example, a player might start with the same historical conditions as the Roman Empire, but history might unfold in a completely different way. Uricchio argues that these games open up master narratives of history and focus our imagination on the possibilities of a contingent, unfolding history.

Clearly, VFX and animation differ from games in some significant ways. Those early ludologists would certainly shudder at the idea of the two being described together. Nonlinear animations for film will eventually create a single image that is the same every time it is played back. Thus, they do not create the open-ended user experiences theorized by game studies. Yet both nonlinear animation and games are premised on the concept of building models as a way of representing the world. Broader conceptualizations of fictional simulation provide a possible theoretical framework for thinking about nonlinear animation together with other fictional forms of simulation, like games.

Animation scholars have already noted the importance of recognizing the connection between animation and games. Almost all games are graphic in nature, and thus rely on animated sequences of images. As Chris Pallant argues, Johan Huizinga's concept of the "magic circle," so valued by ludologists for the way it theorizes "temporary worlds," applies readily to animation and other forms of visual media and performance.69 Focusing on simulation and experimentation uncovers yet more common ground between the two.

Gertrud Koch argues that animation is "isomorphic" with scientific experimentation, in the sense that they both work at the threshold of our understanding of the world and invent theories for what is beyond.70 In other words, animation and the experiment both speculate about reality in an iterative and contingent ongoing process. Nonlinear animation is a paradigmatic case of this common ground between animation and experimental science. To build a simulation is to attempt a new way of understanding the world. A simulation is but one attempt to model the mechanism behind some real phenomenon. It is speculative. Indeed, simulation has proven an unlikely ally in developments in speculative materialist ontologies concerned with the "mind independence" of reality.

How exactly do we *think* about *mind* independence? What is philosophy without humans or thought? Manuel DeLanda offers an answer to this question that utilizes the concept of simulation. DeLanda's primary initiative is to interpret the ontology of Gilles Deleuze in a realist context, articulating his own version of process ontology. Focusing only on Deleuze's *Difference and Repetition*, he argues that Deleuze "grants reality full autonomy from the human mind, disregarding the difference between the observable and the unobservable, and the anthropocentrism this distinction implies."71 DeLanda takes examples from the nonlinear sciences and argues for their compatibility with a process ontology that sees things becoming actual within a space of possibility. This is an effort to formulate a realist position without locking things down into naïve scientific realism. He is highly critical of scientific positivists who only believe in the mind-independence of things that can be schematized within their established laws.

The key ontological gesture of this approach is that it allows us to see anything in the world as having been composed of an assemblage of interacting factors. DeLanda uses the example of a storm to illustrate this point. In a way it is obvious that a storm is composed of an assemblage of factors: it is an event that emerges as a result of things like temperature, airflow, and moisture. His larger argument, though, is that all things in the world are in fact ontologically the same as a storm.72 Animals and even rocks are material things that came to be as the result of an assemblage of constantly changing contingent factors. Simulation therefore confronts us with the indeterminacy of reality and the impossibility of schematizing it using stable laws. It demonstrates reality's continued capacity to surprise us, to assert its autonomy. His approach undermines our ability to schematize reality by emphasizing contingency and the singularity of every individual thing or event.

So how could we ever mediate the world in this way? How could we represent reality as the result of a non-deterministic process of becoming that is autonomous from human perception and understanding? The answer DeLanda offers is computer simulation. In his book *Philosophy and Simulation: The Emergence of Synthetic Reason* he runs through a variety of examples of how simulation can be useful as a tool for realist philosophy. He argues that simulation allows us to conceive of things, not merely in terms of their properties but also in terms of the virtual qualities of their tendencies and capacities. Simulation defines things in terms of what they may become. Simulation does "justice to the creative powers of matter and energy." It is a way to explore the "structure of the space of possibility."73

In order to imagine a speculative realist philosophy DeLanda effectively merges mediation and philosophy; he makes thought "synthetic." This sounds contradictory at first. Mediation is human after all. A medium is what sits between the world and us. Mark Hansen and W.J.T. Mitchell note the importance of the human in theorizing media, writing, "Before it becomes available to designate any technically specific form of mediation, linked to a concrete medium, media names an ontological condition of humanization – the constitutive operation of exteriorization and invention."74 But this definition of media is not as far from DeLanda's approach as it may seem at first. He is describing a "humanization" through invention. Simulations are a sort of translator. Simulations can think without us, yet they are also ultimately our "inventions."

There is good cause to be a little skeptical of DeLanda's use of simulation. A few critics have noted how uncritical he is of simulation. Matthijs Kouw writes in a review of DeLanda's book that simulation has more explanatory power for him than it does even for the sciences.75 Could it be that he believes the virtual character of computer simulation is an effective homolog for his process ontology? Is he reducing reality to mere computation, just as some Cold War cyberneticists did?

There is also the issue of neglecting the way simulations often entail very specific ways of seeing the world. DeLanda uses examples from the R&D history of simulation in his book. He discusses cellular automata and Conway's Game of Life, two examples of nonlinear simulation covered earlier in this chapter. Simulations were used in the making of nuclear bombs. They are used by management scientists and financial mathematicians to extract as much capital as possible. Simulations also have an appealing way of excluding anything you would like to exclude: a new way to sanitize the messiness of history and to manufacture epistemic authority.

Yet DeLanda is advocating for a speculative disposition. Any attempt to use simulation would be but an anecdote, an experiment, a fictionalization that could at best glimpse some aspect of the character of reality. More than anything, it confronts us with the limits of our understanding. Isabelle Stengers writes, "Computer simulations not only propose an advent of the fictional use of mathematics, they subvert equally the hierarchy between the purified phenomenon, (responding to the ideal intelligibility invented by the experimental representation), and anecdotal complications."76 Simulations, in other words, offer us unlooked-for things that might confound our understanding of the world. Simulation can exceed the settled, restrictive epistemology that the arts generally attribute to science and technology. If cinema can be thought of as an apparatus tied to a specific technical disposition regarding time, perspective and so forth, the model-building activity of simulation presents the opportunity to rebuild the apparatus every time you use it. It is never tied to a specific way of seeing and the ideology that might entail.

This theoretical approach to fictional uses of simulation could be applied to nonlinear animation, games, or other forms of generative computational art. The contingency of nonlinearity represents the threshold of our understanding; past this point, reality is beyond our control and beyond our ability to predict. Thinking about engineering as part of film production, focusing on "knowing how," allows us to consider these possibilities. Yet, at the same time, the history of nonlinear simulation seems configured toward developing ways to control and contain unpredictable contingency, and thus to tame or compromise it. The R&D histories of computer science and nonlinear simulation show that these technologies developed within institutional and economic contexts that directed them toward certain uses. Understanding nonlinear animation means considering both of these conflicting sides. There is much more going on here than a definition of simulation as mere artificial fakery or a symptom of postmodernity accounts for.

#### Notes


edition (Baltimore: Johns Hopkins University Press, 1993); Anthonie Meijers et al., eds., *Philosophy of Technology and Engineering Sciences* (Elsevier, 2009).



### Hollywood's R&D Complex

In the 2018 animated feature *Spider-Man: Into the Spider-Verse,* heroes Miles Morales and Peter B. Parker are trying to steal a piece of technology from an evil mega-corporation called Alchemax. To do this they must infiltrate the company's research lab. Rather than looking like the classic supervillain's lab, Alchemax looks more like a contemporary tech company or digital animation studio. Indeed, it bears more than a passing resemblance to promotional photographs of the Vancouver, B.C. offices of Sony Pictures Imageworks, where the film's animation was produced. The lab is full of glass walls, floor-to-ceiling windows, and gleaming white surfaces. People work in large open workspaces, they ride bicycles to work, they sit on yoga balls at their desks, and they have a luxurious cafeteria. As the two heroes try to abscond with a computer, the villain Dr. Octavius (a head scientist at Alchemax) blocks their way. "Could you give that back to me, young man?" she asks in a polite voice, before thundering menacingly, "It's proprietary!" The reference to proprietary technology is something of an inside joke. While it means little to the average audience member, it rings true for anyone familiar with the economic, organizational, and discursive logic of large animation and visual effects studios. Studios like Sony Pictures Animation invest considerable resources in developing proprietary technology, and they protect that intellectual property (IP) through a variety of methods including non-disclosure agreements, patents, litigation, and security. This scene offers a satirical demonstration of how fundamental R&D has become to the business of animation and VFX.

Cinema has always been a field of constant technological change, and animation studios like Disney have historically been agents for technological change.1 Yet the past thirty years have seen a shift in cinema's relation to technological development under the logic of R&D. This is a logic that sees media companies like Walt Disney operating research institutions like Disney Research, a private R&D lab that funds postdoctoral and tenure-track researchers' work on subjects ranging from acrobatic robots, to performance capture, to software that can estimate children's physical characteristics based on their voices (really).2 This entanglement of Hollywood and technological development first started to take shape in the 1980s, as the U.S. government began to replace Cold War federal research funding with tax-break incentives for private research, and it has intensified since.

Thinking about animation and VFX as both culture industries and technology industries transforms our understanding of how they function. Since the bankruptcy of Rhythm & Hues, a studio that had been receiving public recognition for its work on *Life of Pi* (2012), the instability of the VFX industry has been the subject of discussion by industry press, workers, and scholars. While many have rightly pointed to the internationalization of labor and the competitive bidding system in VFX,3 getting to the bottom of this economic instability requires understanding the substantial upfront costs and risks of developing new technologies.

Nonlinear animation offers the clearest case of how Hollywood became involved in supporting R&D. The film industry did not simply pluck animation technologies like nonlinear animation ready-made from the field of computer graphics. Instead, it supported research on nonlinear simulation, taking over for Cold War sources of support, drawing in researchers from other fields, and supporting research labs in universities. As audiences were starting to see early nonlinear animation in films like *Star Trek II: The Wrath of Khan* (1982), an extensive institutional and industrial reconfiguration was taking place behind the scenes. Since 1980 Hollywood has played an ever-expanding role in supporting the development of computer graphics technologies like nonlinear animation. It has become such an effective engine for technology development that "Hollywood software" is now used in a variety of other fields, from architecture to geophysics.

As Chap. 2 established, in order to understand the dramatic changes cinema has undergone in the past few decades, we need to look not only to the nature of the digital but to the institutional context of R&D. Seen from this angle, the flm industry was not transformed by the appearance of some external technology, but instead it underwent institutional, economic, and discursive changes that resulted in new technologies. Hollywood R&D shaped many of the technologies that are supposed to have transformed cinema so dramatically since 1980. This chapter studies in detail exactly how this process works and how it came about.

This emphasis on the role of technology in VFX and animation might sound somewhat uncritical to readers who are familiar with the promotional bluster of these studios, which often promote their technological advances through a Silicon Valley-styled rhetoric of innovation. But a clear picture of the role of R&D in these industries undoes many of these promotional myths. In particular, this chapter counters the neoliberal myth of entrepreneurialism as the innovative antidote to ossified bureaucracy. Far from being mavericks working in isolation, studios tapped into Cold War research infrastructures and built new relationships with public institutions. R&D has also proven to be an effective tool for economic hegemony, keeping strategically valuable technology in the hands of the studios with the biggest budgets.

#### Complexes, Military-industrial and Otherwise

Hollywood's relationship to R&D in the past few decades is that of a *complex*. This is a term that requires some historical context. According to the OED, a complex is "a whole comprehending in its compass a number of parts, esp. (in later use) of interconnected parts or involved particulars; a complex or complicated whole."4 In psychoanalytic terms, a complex is a collection of unconscious thoughts grouped together around a specific subject, as in an "inferiority complex." One might also refer to a group of buildings sharing a common space as a complex. This concept of parts becoming a whole through their interrelation gained new meaning and political significance in 1961, when President Dwight D. Eisenhower famously warned against the growth of a "military-industrial complex" in a speech at the end of his presidency.5 Eisenhower spoke from a moment in American history that saw military spending continue to rise after the immense mobilization of World War II, long outliving its practical utility. The war led to the creation of an organizational entity, a whole made of interrelated parts, a complex, yet one that was self-sustaining, even as the conditions that created it changed.

The military-industrial complex saw a co-dependent entanglement form between private industry, the government, and the military. Political science scholars describe this phenomenon as an "iron triangle," in which policies made by Congress support bureaucratic institutions that benefit private interest groups, which in turn support congressional representatives.6 Congressional committees allot funding for military programs that benefit defense contractors, who in turn support members of Congress with campaign funds and by creating jobs in their constituencies. The term military-industrial complex thus does not only describe the emergence of a new institutional-economic collection of parts; it also describes a self-sustaining entity with its own internal logic. Complexes are sets of relations that endure because they are self-sustaining. In his *Marxist Theory of Bureaucracy,* Ernest Mandel describes the military-industrial complex as "a near-perfect feedback mechanism of self-expansion."7 A complex such as this took shape in Hollywood at the end of the Cold War. It was not a complex of military hardware procurement but one of R&D.

The military-industrial complex supported R&D extensively through World War II and after. Eisenhower recognized the effect the military-industrial complex was having on research. He observes in his speech, "a government contract becomes, virtually, a substitute for intellectual curiosity." Indeed, historian Douglas Brinkley claims the original subject of Eisenhower's speech was supposed to be the "military-industrial-scientific complex."8 Three years after Eisenhower's speech, Sen. J. William Fulbright offered a more focused critique of the effects of the military-industrial complex on scholarship. He worried that the amount of funding directed toward supporting technological advances for the military was creating a "distortion of scholarship."9 Fulbright was responding to trends in research and education that seemed to be intensifying rather than dissipating after the end of World War II. In the 1950s, the Department of Defense accounted for 80% of federal research spending, higher than at any time during World War II.10 In 1964, research spending accounted for 25% of total federal discretionary spending.11 The government earmarked this outsized spending for R&D that might offer national strategic value. In his history of how defense spending shaped American technical universities like MIT and Caltech, Stuart Leslie uses the term "golden triangle" to describe this military-academic-industrial complex.

Part of what made Eisenhower's original speech so striking was the suggestion that the logic of the military-industrial complex influences the very fiber of the nation. Its effects were "political, economic, even spiritual." The complex could make the nation more warlike or less free. This applied to R&D in particular. Eisenhower and Fulbright feared that the complex was influencing the products of research, producing technologies of death and misery instead of doing research that might help humanity understand itself and the world.

The military-industrial complex endures to this day. The government continues to buy M1 Abrams tanks despite the fact that the U.S. Army does not want them. And reasons to use military hardware, wars cold, hot, proxy, drug, or otherwise, have certainly been numerous since 1945. Yet much has also changed. Federal funding for research, for one, has changed dramatically since the 1960s. Toward the end of the Cold War, in the context of President Reagan's neoliberalism, the U.S. government's approach switched from directly funding R&D to providing tax credits for private research. While the total amount of R&D funding has continuously risen since the 1950s (except for a short period of stagnation during the 1990s recession), starting in 1985 federal funding began to level off, failing to keep pace with economic growth, while private industrial funding more than made up for its absence (Fig. 3.1).12

This decline in earmarked federal funds promised an end to complexes, especially for neoliberal free market apologists. It promised to "starve the beast" rather than feed it, to use Reagan's term. Without the distorting effects of a complex, technology and the nation's "spirit" would finally be free or, to use Karl Popper's famous term, "open." Silicon Valley embodies this promise. These tech companies are supposed to be delivering innovation for economic growth through entrepreneurial autonomy: research directed not by the hand of the government but by the invisible hand of the market. This is a discourse that has shaped the animation and VFX industries in America since the 1980s. Pixar co-founder Ed Catmull describes George Lucas' Skywalker Ranch production facilities (where he once worked) as being halfway between Silicon Valley and Hollywood, both in terms of travel time and metaphorically.13 Pixar is itself a paradigmatic Silicon Valley tech company.14 Yet this discourse of free and open technology and of a technology-driven industry that does not rely on the state is not borne out by history.

**Fig. 3.1** R&D spending in the United States in billions of constant 1996 dollars

For one, this discourse elides the tech industries' reliance on training and research that public universities and remaining federal funding continue to provide. Economist Mariana Mazzucato has extensively documented how, for example, companies like Apple rely on government R&D and public research institutions.15 It also neglects the fact that Silicon Valley was a clear product of the Cold War R&D complex, and that letting markets shape the course of research in place of the government does not necessarily produce better outcomes for humanity. While neoliberal fantasies like Silicon Valley's technological utopianism seem to promise a totally open field of R&D unencumbered by "distortions of scholarship" and governmental interference, new complexes have proliferated. Industries have begun to take on the directive role the military used to play, recruiting researchers and institutions, both private and public, into developing technologies for their specific commercial interests. Hollywood's R&D complex offers examples of all of this. It grew from institutions and organizations established by the military-industrial-academic complex, it built significant ties with public and private research universities, and the demands of the industry shape the technological products of all of this research. The use of nonlinear simulation for animation is a clear example of this.

Cinema and other media have a long history with the military. Some recent work on this subject offers examples of media technologies being developed between the military and media industries. Haidee Wasson and Lee Grieveson note that technologies like radio and cinema had important strategic and technical utility beyond their role as entertainment media.16

Rebecca Prime's recent work chronicles how a former special effects technician from Paramount developed wide-screen technology for training aircraft gunners, which he then marketed back to Hollywood as Cinerama. Prime also notes the intimate relationship between this new projection format and the aesthetics of aerial photography, and how this spectacular combination played a role in the imperialistic soft power of the US Information Agency.17 In Hollywood's R&D complex, this transactional relationship between Hollywood and the military continued through the 1980s and 1990s, but gradually the locus of technological development shifted from the military to Hollywood. Now Hollywood develops technologies for other research fields and industries to use.

Since the 1980s, new research labs oriented toward media-industry applications have begun to appear in academic computer science departments at universities like Stanford and the University of Toronto. In the case of nonlinear animation, as a following section will demonstrate, many scholars and technologies migrated from a military institutional environment to animation and VFX studios and software companies. The desire for new nonlinear animation technology became so strong that it encouraged the recruitment of researchers from other fields such as aerospace and geophysics. While the stakes may be lower than the militarization of the country, Hollywood's R&D complex raises the same issues Eisenhower, Fulbright, and Leslie raised. How has the "spirit" of computer graphics research been transformed? What new epistemic paradigms are at work here? The stakes of these questions are not only about scholarly freedom and the shape of scientific knowledge, but also about the transactional relationship between media industries and technological change.

#### ACM SIGGRAPH

One of the best places to observe Hollywood's expanded role in R&D year-over-year is the Association for Computing Machinery's Special Interest Group on Computer Graphics and Interactive Techniques (ACM SIGGRAPH), and its annual conference of the same name. SIGGRAPH has been the most important computer graphics research organization since the 1970s, shaping the direction of the field and the technologies it produces. At its peak in 1997, SIGGRAPH had 48,000 members worldwide, and it continues to be a dominant (and now international) force in computer graphics research. Researchers sometimes assess the value of computer graphics research in terms of how "siggraphable" it is.18

Conferences and professional associations provide particularly effective contact zones for the overlap between academia, government, industry, and media. These are the places where institutions and businesses mobilize scientific research resources for specific technological applications. They can also demonstrate how a given media technology took shape over time as a result of infrastructural, institutional, economic and political forces. These organizations' meetings and communications have a logic and culture that stem from these forces. As Raymond Williams notes, R&D is a site where we can look for the way "social needs, purposes and practices" shape media technologies.19 Study of SIGGRAPH's publications reveals which institutions and businesses support research over time, what sort of research is being done, and how researchers move between businesses and public institutions.

It should come as no surprise that the military-industrial-academic complex heavily sponsored SIGGRAPH in its early days. But starting in 1980 the type of research being done, the institutions sponsoring it, and even the character of the images circulating at the conference began to change. During this period, media industries (especially Hollywood and its blockbusters) became a vital force shaping research fields such as nonlinear animation at SIGGRAPH. This history demonstrates that Hollywood was not disrupted by external technologies developed for other uses; rather, it played an important role in shaping the development of computer graphics technologies.

There is a tendency in histories of computer graphics to focus on key ideas that crystallized the field and inspired further research. The two most common examples are Ivan Sutherland's 1963 demo of his project Sketchpad and Douglas Engelbart's 1968 demo at the Joint Computer Conference, also known as the "mother of all demos." These histories particularly single out Engelbart's Advanced Research Projects Agency (ARPA)-funded project as having demonstrated a wide range of graphic and interactive functionality that defines much of the modern computer to this day: from the computer mouse to Google Docs. Ideas do not change history on their own though. These researchers needed institutional support and a means of disseminating their ideas. The Joint Computer Conference that hosted Douglas Engelbart's 1968 demo, for example, was made possible by coordination between the two key computer science research organizations, the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). Without the Joint Computer Conference or the SIGGRAPH publications and conferences that followed, without university computer science departments, and without military-fuelled research funding from the government, computer graphics would likely have developed in a different way. And the course of computer graphics research has been directed by prevailing institutional, political, and economic conditions ever since.

SIGGRAPH started as a newsletter in 1967, founded by Andy van Dam, a professor at Brown University, and Sam Matsa, a researcher who worked for companies like IBM and General Motors. Their newsletter was geared toward computer science researchers who were interested in the visual and interactive potential of computers, many of whom were inspired by Sutherland's work. In 1974 SIGGRAPH became an annual conference that was the central hub of both academic and industrial computer graphics research. At this point it was still heavily sponsored by the military-industrial-academic complex. And although SIGGRAPH attracted interest from various industries, for its first thirteen years the film industry was utterly uninterested and uninvolved. According to Ed Catmull, Disney had no interest in computer graphics when the University of Utah sent him as a graduate student to propose an exchange program. Instead, they offered him a job as a theme park imagineer.20 Years later, when he was funded by the New York Institute of Technology's (NYIT) Computer Graphics Lab, Catmull tried to find a film studio that might be interested in opening a computer graphics research department. Once again, he was rejected.21 Even as he and his colleagues at NYIT were working toward making an animated 3D feature, they could not attract the interest of Hollywood.

The world of computer graphics was, conversely, extremely interested in getting more involved in media industries. Even in the early days of computer graphics there was a great deal of interest in exploring the artistic potential of these new tools. Not only were there numerous artists and engineers attracted to this potential, such as pioneering experimental digital animators and artists John Whitney, Charles Csuri and David Em, but so too were the key facilitators of R&D. In 1966 John Whitney was the first person to be awarded the position of artist in residence at IBM. In 1967 Bell Labs founded its Experiments in Art and Technology (EAT) program, which facilitated joint projects between artists and engineers. NYIT employed several researchers in an unsuccessful bid to make the first fully 3D animated feature, *The Works*. Xerox PARC employed David Em in 1975 to explore the potentialities of interactive graphics software they were developing called SuperPaint, and he went on to be artist in residence at NASA's Jet Propulsion Laboratory at the California Institute of Technology from 1977 to 1984.

While it seems clear that a general zeal for the transformative potential of computers in part propelled this sort of research, it is also explicable through the speculative logic of R&D. Companies like Xerox and IBM were hoping to develop software that might be widely adopted by various media industries for image making. What they lacked, though, was actual coordination with such an industry. They needed a market for their tools, and computer graphics needed an audience. Without an industry like Hollywood, they were a hammer in search of a nail.

In 1979, someone in Hollywood finally took an interest. Lucasfilm started hiring computer graphics researchers, including Ed Catmull and SuperPaint researcher Alvy Ray Smith. The effects of this new industrial influence were immediately evident at SIGGRAPH's annual conference. In 1980, Catmull and Smith published their first paper under the institutional affiliation of Lucasfilm at SIGGRAPH.22 In the following years, VFX- and animation-oriented studios and software companies sponsored papers at SIGGRAPH with gradually increasing frequency as well. Early examples include a paper by Canadian computer graphics researcher William T. Reeves, affiliated with Lucasfilm, in 1983,23 a paper on fluid simulation by computer science researchers Larry Yaeger, Robert Myers and Craig Upson, affiliated with Digital Productions and Poseidon Research, in 1986,24 another paper by Reeves from the same year, now affiliated with Pixar,25 and further work in the following years supported by Pacific Data Images (PDI).26 Contributions from other VFX and animation companies would pick up more in the 1990s as computer graphics began to proliferate in Hollywood and other media industries. VFX studios like ILM, Rhythm & Hues, and Digital Domain, large animation studios like DreamWorks and Pixar, and software companies that serviced these studios like Softimage and Alias|Wavefront, all began to support substantial research.

The visual culture of SIGGRAPH changed markedly over this period as a result, shifting the aesthetics and function of tech demos to serve media industries. The computer graphics tech demo had been an important part of the computer science world since Engelbart's "mother of all demos." Furthermore, SIGGRAPH has always made space for experimental computer art. But the involvement of media industries like Hollywood created different forms of visual culture that did not fit neatly into these categories. Demo reels by 3D animation studios that mostly made short commercials or 3D logos, such as Robert Abel & Associates and Pacific Data Images, began to appear from 1980 to 1983, as did clips from features like *Star Trek II: The Wrath of Khan*. New venues for visual culture like the Computer Animation Festival, started in 1984, might feature artistic experimentation, tech demos, demo reels, or clips from films. The aesthetics of animation also changed. While long shots with 3D moving cameras were the norm in the early 1980s, shorter shot lengths and continuity editing techniques borrowed from cinema started to become more common.27

Chris Landreth's work for 3D software company Alias|Wavefront is an interesting example of how art, tech demo, and demo reel were blurred together at SIGGRAPH. Although he is now well known for his National Film Board of Canada short *Ryan* (2004), Landreth is a trained engineer who did research in fluid dynamics at the University of Illinois. When he moved to Alias|Wavefront he had the opportunity to make short demos as a function of testing and development. His demos were unique because they betrayed his artistic aspirations, and those aspirations proved well suited to the new hybrid logic of SIGGRAPH's visual culture. *The End*, his first work to be featured at SIGGRAPH, in 1995, is an ironic parody of self-reflexive modernist conventions, yet Alias|Wavefront made it for the purpose of exhibiting new facial animation techniques. His next short, *Bingo*, exhibited in 1998, which promotes the company's new Maya animation suite, is even more artistically ambitious. Based on a play by experimental theater group the Neo-Futurists, this supposed tech demo is a work of existential absurdity and grotesque surreal imagery. Characters are naturalistically rendered with shadows and textures, yet they are also squashed and stretched in cartoonish ways. One character is made of human flesh stretched into the shape of a tree. These early works by Landreth fit in between what one would expect from a film festival, an academic conference, and an industry trade show. While we might think of them as the singular works of a creative individual given too much autonomy, the fact that he was able to make further, more elaborate demos suggests that his work served a useful promotional function for Alias|Wavefront. Their hybrid nature suits exactly the paradigm created by the ever-increasing influence of media industries at SIGGRAPH. This is the enfolding, integrating effect of the complex, just as critics of the military-industrial complex like Fulbright described. Media industries like Hollywood transformed the "spirit" of computer graphics research and the tools it produced.

Since the early 1980s, SIGGRAPH has developed into an interdisciplinary conference whose members include "researchers, developers, and users from the technical, academic, business, and artistic communities."28 Companies as diverse as Lockheed Martin, Boeing, Apple, MTV, Swedish Energy Company, NVIDIA, and Symantec have sponsored events and recruiting tables at the conference. These represent a diversity of industrial and cultural applications for computer graphics. But of all these varied industries, the film industry has played an outsized role in how SIGGRAPH has changed since the 1980s.

Focusing on organizations like SIGGRAPH and the institutions they connect runs counter to popular narratives about the history of computer graphics. The public, and even some historians, frame innovations in computer graphics technology as the result of individual tinkering and the détournement or appropriation of military technology. This is the ideology of the hackers and maverick entrepreneurs of Silicon Valley. This way of thinking is at work in the self-promotion of companies like Pixar, and it can also be found creeping its way into histories of computer graphics such as Tom Sito's *Moving Innovation: A History of Computer Animation*. Sito acknowledges that computer graphics emerged from academic research and military programs, but from there he seems to see computer animation following its own trajectory. Sito describes early computer graphics innovators as "oddball scientists who looked at the huge mainframe computers of IBM and Honeywell and thought, let's make cartoons with them."29 For this reason he focuses on figures like Ed Catmull, a person whom he sees as singularly driven toward making cartoons with computers. Yet what this approach neglects is the necessity of sustained relations between different industries and research institutions. In retrospect, Catmull made great contributions, and he clearly had a vision for computer animation. But without public institutions, without the interest of a robust established media industry, and without SIGGRAPH to coordinate these different bodies, efforts like his would have remained a subject for curious media archeologists, like Nikolay Konstantinov's experiments with computer animation in the Soviet Union in the 1960s. Such examples are important, but they cannot explain the dominant norms of an industry like Hollywood or Silicon Valley. VFX and animation companies do not pluck technologies out of thin air or appropriate them ready-made from unrelated fields. They are constantly involved in the development of tools made specifically for their needs. Looking at the development of a particular computer graphics technology over time helps to make this clear.

#### Developing Fluid Simulation for Animation

Fluid simulation is a technology that was initially developed in the military-industrial-academic complex before being transformed into nonlinear animation tools by Hollywood's R&D complex. This was not a question of Hollywood simply importing tools developed for other purposes. Rather, the VFX and animation industries built their own research infrastructure, funding basic research, developing relationships with universities, supporting research labs, and employing researchers. Through this they developed their own technology. Indeed, some of the very earliest research into visualizing fluid simulations was done to achieve a visual effect. Looking longitudinally at the development of fluid simulation animation tools demonstrates how this R&D complex took shape, and how it replaced military sources of funding (Fig. 3.2).

The history of fluid simulation begins with hydrodynamics: the study of the forces acting in fluids. Hydrodynamics is in some ways a very old discipline. Irrigation and aqueducts require some ability to predict how fluid behaves, and these are as old as civilization itself. The polymath Leonhard Euler formalized the first theory of fluid dynamics in the mid-eighteenth century. His work provided an equation that described the dynamics of a fluid through factors like pressure and momentum. Further work by physicists Claude-Louis Navier and George Gabriel Stokes in the nineteenth century added nuance and new factors like viscosity and thermal conductivity. The *Navier–Stokes equations* they developed continue to be the essential standard for calculating the varying factors that affect the dynamics of fluid movement. But performing these complex calculations continues to be difficult. As Chap. 2 noted, the movement of fluid is a nonlinear problem, where an outcome cannot simply be calculated from initial conditions, and thus it is a prime candidate for simulation.
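For reference, the incompressible form of these equations can be written as follows (a standard textbook formulation rather than the historical notation; here u is the velocity field, p the pressure, ρ the density, ν the viscosity, and f external forces such as gravity):

```latex
% Momentum equation: the nonlinear advection term (u . grad)u is what
% resists closed-form solution and motivates numerical simulation.
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f}

% Incompressibility constraint: the velocity field is divergence-free.
\nabla \cdot \mathbf{u} = 0
```

The second equation encodes the assumption, common in graphics applications, that the fluid's volume does not change.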

The first research into computational fluid dynamics was conducted under the aegis of the Los Alamos National Laboratory. In fact, the first publication of the Monte Carlo method, the first nonlinear simulation, was a 1953 paper on fluid simulation research.30 The T3 (Third Theoretical) Group at Los Alamos, headed by physicist Francis Harlow, conducted the majority of early work from 1955 to 1971.31 The T3 group took concepts like the Navier–Stokes equations and made computer simulations of fluid dynamics. Through their work they produced mathematical methods for modeling fluid such as Particle in Cell (PIC), Implicit Continuous-fluid Eulerian (ICE), and Lagrangian Incompressible (LINC). Many of these continue to be used in fluid simulation software.

**Fig. 3.2** Fluid simulation research tools and applications
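The Monte Carlo idea — estimating an outcome by averaging many random trials rather than solving equations directly — can be illustrated with a toy example far simpler than the 1953 fluid work: estimating π by sampling random points in a square. The function name and sample count below are illustrative, not drawn from any historical program.

```python
import random

def estimate_pi(n_samples=100_000, seed=42):
    """Estimate pi by Monte Carlo sampling.

    Draw random points in the unit square and count the fraction that
    fall inside the quarter circle of radius 1. That fraction tends to
    pi/4 as the number of trials grows -- the same average-of-random-
    trials logic that early simulations applied to problems, like fluid
    flow, with no closed-form solution.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples
```

More samples yield a better estimate, at proportionally greater computational cost, which is why early Monte Carlo work was tied so closely to access to computing machinery.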

Early fluid simulations were numerical; they were not visualizations that looked like the things they were simulating. Hollywood played a key part in supporting initial research in the mid-1980s that addressed the particular challenge of visualizing fluid simulations, which entailed working out how to make 3D models of undulating surfaces and how to make simulations less resource intensive. The first fluid simulation research presented at SIGGRAPH was not military or scientific; instead, it was for a VFX sequence in *2010* (1984), the sequel to Stanley Kubrick's *2001: A Space Odyssey* (1968). The effect in question was a 2D simulation of the swirling atmosphere of the surface of Jupiter. Researcher Robert Myers worked on this project as an employee of Poseidon Research, a company that usually worked on military projects. The other two authors were employees of VFX studio Digital Productions, but one of them, Larry Yaeger, was a former employee of Grumman Aerospace.32 Thus, while one can already see the film industry's influence on fluid simulation research, the military-industrial-academic complex was also part of the picture.

A military R&D company, an Apple Computer R&D group, and a few research universities sponsored the next papers on fluid simulation for computer graphics. The first of these was a paper on simulating large calm bodies of water in 3D by Stanford electrical engineering PhD Michael Kass and Cambridge computer science PhD Gavin Miller at Apple Computer's Advanced Technology Group.33 Similar work soon followed from computer science researchers at George Mason University and the University of Central Florida.34 Hollywood films like *Waterworld* (1995) and *Titanic* (1997), for which VFX studios created large areas of relatively calm water, exemplify this era of fluid simulation in the early-to-mid 1990s.

The two major tools that implemented this simulation technology were Dynamation, from Alias|Wavefront (a subsidiary of SGI, formerly Silicon Graphics), and Arete Entertainment's Digital Nature Tools. Arete was founded in 1976 in response to a call from the Department of Defense for new sensor technologies. Its research involved using computer simulations of fluids to detect the presence of an object by observing the perturbations the object made in a fluid medium. Searching for new markets in 1996, Arete managed to catch on to a new demand in computer graphics for naturalistic-looking water. Arete merged with German VFX studio SZM and developed new products specifically for animation and VFX such as Arete Image Software and the Digital Nature Tools plug-in. Their presence is quite evident at SIGGRAPH during this period: they were a sponsor and hosted a recruiting table, their researchers presented their work in publications, and their technology was on display in technology demonstrations. Arete is a weathervane for a general shift from military to media industries at SIGGRAPH. They did not seek out the VFX and animation industries because they had a dream of making cartoons, as Ed Catmull is supposed to have. They simply sought out opportunities and research funds. When one source of revenue dried up, they sought a new one.

The next major developments in fluid simulation were supported by yet more research universities and further research at Alias|Wavefront. University of Pennsylvania researchers Nick Foster and Dimitris Metaxas based their 1996 research paper on work done by Francis Harlow and Eddie Welch at Los Alamos some thirty years earlier, adapting exotic scientific concepts from 1965 so that they could make animations quickly on conventional hardware.35 While Foster and Metaxas' work was a major advance in terms of physical accuracy (and thus naturalism), it was still relatively resource-intensive and unstable. One key issue that many researchers have noted is that greater scientific fidelity often comes at the cost of being able to modify or customize a simulation.

Three years later a researcher at Alias|Wavefront, Jos Stam, published an approach at SIGGRAPH that was less resource-intensive and more "interactive."36 In other words, it was more apt to handle external inputs without causing the simulation to collapse. Thus, not all research was directed toward an ideal of scientific realism. These contributions made robust fluid simulation much more practical and economically viable. The increased interactivity of this technology also meant artists could go further in manipulating a simulation to get the look they wanted. This push toward the *directability* of simulations, to make them more controllable, became a key development goal after Stam's work, to the point that it rivaled the quest for realism.37
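The stability at the heart of Stam's approach came from semi-Lagrangian advection: rather than pushing quantities forward (which can blow up with large time steps), each grid cell traces backward through the velocity field and interpolates the value it finds there. The one-dimensional sketch below illustrates the idea only; it is not Stam's implementation, and the function name and grid setup are illustrative.

```python
import numpy as np

def advect_semi_lagrangian(q, u, dt, dx):
    """One semi-Lagrangian advection step on a periodic 1D grid.

    Each cell looks *backward* along the velocity field to find where
    its value came from, then linearly interpolates. Because values are
    only ever interpolated (never extrapolated), the result stays within
    the range of the input no matter how large dt is -- the property
    that made the solver stable and "interactive".
    """
    n = len(q)
    x = np.arange(n, dtype=float)
    # Backtrace departure points (in grid units), wrapping periodically.
    x_src = (x - u * dt / dx) % n
    i0 = np.floor(x_src).astype(int)
    i1 = (i0 + 1) % n
    frac = x_src - i0
    return (1.0 - frac) * q[i0] + frac * q[i1]
```

An artist-driven input, such as a gust of "wind" added to the velocity field, simply changes where the backtrace lands; the simulation bends rather than breaks, which is exactly the quality that made such solvers attractive for directable effects work.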

These developments led to a proliferation of fluid tools, both produced in-house at the big five VFX studios and by independent software companies. These include the Maya Fluid Effects system, Next Limit's RealFlow, and, on the studio side, Pacific Data Images' FLU, Rhythm and Hues' Fluid Dynamics Tools, ILM's OCEAN, and Digital Domain's FSIM. One can see this era of fluid simulation technology at work in the VFX and animation spectacles of the late 1990s and early 2000s, from the droplets of water in *Antz* (1998) to the devastating waves and storms in *The Perfect Storm* (2000) and *The Day After Tomorrow* (2004).

Through the 2000s, researchers continued to propose new techniques that offered a higher level of realism, were less resource-intensive, or allowed a greater degree of directability. The most successful approaches tried to achieve all of these traditionally contradictory demands. For example, Rhythm & Hues researcher Jerry Tessendorf's *fast Fourier transform* technique for animating oceans was so efficient it could be used in real time for gaming applications.38 Tessendorf had actually worked at early fluid simulation software company Arete Entertainment before coming to VFX studio Rhythm & Hues. Another substantial contribution in this era came from Stanford mathematician Ron Fedkiw, who took the *level set* approach that his mentor, mathematician Stanley Osher, had developed for the numerical analysis of curved shapes and applied it to fluid simulation geometry.39 He developed this approach while working both as a professor at Stanford and as a consultant at ILM.
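The FFT approach generates an entire tileable ocean surface in a single transform: random spectral wave amplitudes, shaped by an oceanographic spectrum and animated by the deep-water dispersion relation, are inverse-Fourier-transformed into a heightfield. The sketch below captures that overall structure only; the constants, normalization, and function name are illustrative rather than Tessendorf's exact formulation.

```python
import numpy as np

def ocean_heightfield(n=64, length=100.0, wind=(10.0, 0.0), amp=1e-3,
                      t=0.0, seed=0):
    """Synthesize a tileable ocean heightfield with one inverse FFT.

    Build random spectral amplitudes shaped by a Phillips-style
    spectrum, advance each wave's phase by its deep-water dispersion
    frequency, then inverse-transform the whole grid at once.
    """
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # wavenumbers
    kx, ky = np.meshgrid(k, k)
    k_mag = np.hypot(kx, ky)
    k_mag[0, 0] = 1.0  # avoid division by zero at the DC term
    g, wind_speed = 9.81, np.hypot(*wind)
    L = wind_speed ** 2 / g  # largest wave arising from the wind
    # Phillips-style spectrum: favors waves aligned with the wind.
    wind_dir = np.array(wind) / wind_speed
    cos_factor = (kx * wind_dir[0] + ky * wind_dir[1]) / k_mag
    phillips = (amp * np.exp(-1.0 / (k_mag * L) ** 2)
                / k_mag ** 4 * cos_factor ** 2)
    phillips[0, 0] = 0.0  # no mean displacement
    # Random complex amplitudes; phases advance at omega = sqrt(g * k).
    h0 = ((rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
          * np.sqrt(phillips / 2.0))
    h = h0 * np.exp(1j * np.sqrt(g * k_mag) * t)
    return np.fft.ifft2(h).real * n * n
```

Because the surface is periodic, the same patch can be tiled across an arbitrarily large ocean, and because the FFT is cheap, the heightfield can be regenerated every frame, which is what makes the technique viable in real time.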

Up to this point, animating something like a churning ocean would involve compositing many different techniques: some for the waves, others for the spray, and others still for the foam on the surface of the waves. One persistent challenge has been that fluid simulations tend not to scale well, requiring different tools for small splashes versus big waves.40 One of the primary foci of recent research has been to create tools that work at all scales.

In 2008, Robert Bridson (a researcher who trained under Ron Fedkiw at Stanford and currently teaches at the University of British Columbia) helped build a fully scalable simulation technology for ILM based on the *fluid implicit particle* (FLIP) technique.41 Together with his business partner and fellow fluid simulation researcher Marcus Nordenstam, Bridson formed the software company Exotic Matter and released a product based on this method called Naiad. The blockbuster *Battleship* (2012), which provided the capital for ILM to do significant R&D, offers an example of this latest era of fluid simulation technology.
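The distinction FLIP draws can be stated compactly: after each grid pressure solve, a PIC-style update replaces a particle's velocity with the interpolated grid value, while a FLIP-style update adds only the grid's *change* to the velocity the particle already carries. Production solvers typically blend the two. The sketch below assumes the grid velocities have already been interpolated to the particle's position; the names and blend default are illustrative, not drawn from Naiad or ILM's code.

```python
def flip_pic_update(v_particle, v_grid_old, v_grid_new, flip_ratio=0.95):
    """Blend FLIP and PIC velocity updates for one particle.

    PIC (replace the particle velocity with the new grid velocity) is
    smooth but dissipative; FLIP (add only the grid's change to the
    particle's own velocity) preserves lively detail but can get noisy.
    flip_ratio interpolates between the two behaviors.
    """
    pic = v_grid_new
    flip = v_particle + (v_grid_new - v_grid_old)
    return flip_ratio * flip + (1.0 - flip_ratio) * pic
```

Because particles carry their own velocities between grid solves, the same machinery handles fine droplets and large waves, which is what "fully scalable" refers to here.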

The implementation, and indeed the basic research, for many of these technologies was funded by a single film. In these cases, a VFX studio or a software company employed one of these scientists to do custom work for a specific effect in a specific sequence. For example, Ron Fedkiw is credited for "fluid simulation engineering" on the flopped blockbuster *Poseidon* (2006), Robert Bridson is credited for "research and development" on *The Hobbit* (2012), and Jerry Tessendorf is credited as "principal graphic scientist" on *Superman Returns* (2006). All these researchers were professors at research universities while they were doing this work, and most published their results. Many of their academic presentations and papers contain illustrations from the films they worked on. The proliferation of fully rendered Hollywood animation and VFX sequences is yet more evidence of the influence of the film industry on SIGGRAPH.

Though many early fluid simulation researchers working in computer graphics have computer science backgrounds, many were also drawn from other fields. Jerry Tessendorf started out with a PhD in physics from Brown before moving on to work at Arete, which then changed from doing military research to doing VFX and animation research. Next, he moved on to VFX studio Cinesite, then to Rhythm & Hues, where he was "principal graphics scientist." John Anderson was a professor of atmospheric sciences at the University of Wisconsin-Madison before he became the head scientist behind nonlinear animation at ILM in the late 1990s. Mark Stasiuk, co-founder of nonlinear animation studio Fusion CI, was working on the fluid dynamics of volcanic eruptions before getting into VFX and animation. Kenneth Museth, former principal engineer at Dreamworks and researcher at Digital Domain, started with a PhD in quantum physics. Before working in VFX he did "trajectory design" at NASA's Jet Propulsion Laboratory (JPL), and he also worked for private space company SpaceX. During his tenure at Digital Domain, he helped develop solutions for simulating fluid that would become their proprietary simulation software FSIM and STORM. Like so many other researchers, Museth is a tenured professor as well. The fact that so many accomplished researchers from a diversity of backgrounds have gone to work in Hollywood demonstrates how much capital investment has moved into R&D and how strong the media industry's influence on this field of research has become.

All these researchers have been prolific at SIGGRAPH, producing research that has led to new software and computational concepts, which in turn lead to new kinds of moving images. Many of these researchers can also count an Academy Award for science and technology in addition to their many academic achievements. This award is a clear sign from the industry that simulated fluids have had a substantial influence on the way movies are made.

This first generation of researchers, who came from a variety of backgrounds, are now advising graduate students who work specifically on computer graphics animation problems. Many of these early researchers have established labs in computer science departments that help train graduate students to do fluid simulation for media industries. The University of Toronto's Dynamic Graphics Project, where Jos Stam works, has had strong connections to Alias|Wavefront and its successors. The Stanford Computer Science Department's PhysBAM program, headed by Ron Fedkiw, makes up the core of ILM's simulation technology.42 As a result, younger researchers in this field now come from other disciplines like geology or quantum physics less often. This is not the end of interdisciplinary or inter-industry exchange, but rather a sign of the maturity of the field, and of the influence of media industries' research funding.

Hollywood thus did not pluck animation technologies like the ones described here from other industries or from the military-industrial complex. Rather, Hollywood drives its own R&D. Just as Senator William Fulbright once observed that scientific research was being shaped by the substantial demand for military R&D, a remarkably similar transformation has taken place in the VFX and animation industries. This technological change was not an external force being exerted on the film industry; it was an internal, directed force, shaped by the demands of the industry.

#### Blockbuster Technology

The history of fluid simulation demonstrates Hollywood's computer graphics R&D complex at work. Money was flowing from Hollywood for research projects and for jobs, and new computer labs were emerging to do the research and train future workers. In some cases, this new source of research funding replaced the role the military used to play. This situation resulted in part from the aforementioned switch from federal R&D funding to tax credits for private research. But on its own, this does not explain where the money came from. It was only when the Hollywood blockbuster met the logic of R&D that Hollywood's R&D complex properly took shape. With its particular economic model and its strategic relationship to technological change, the blockbuster proved a perfect fit for supporting R&D. This institutional and economic configuration is every bit as important for understanding changes in film production over the past three decades as any discussion about the nature of digital technology itself.

Julian Stringer notes that the term blockbuster means different things in different contexts; it can be a term of derision or praise, a planned hit or a "sleeper."43 The majority of scholarship thus far has focused on the blockbuster as a planned must-see event: a big film. For New Hollywood films like *Jaws* (1975), this meant a nation-wide simultaneous release coordinated with a TV ad campaign and the promotion of opening-weekend box office figures in the following weeks.44 According to Anita Elberse, the logic of blockbusters is common to "entertainment markets," from sports to books to television, and the rise of digital technology has done nothing to disrupt this logic.45 Her analysis of the film industry shows that even though big films are risky ventures, on average they produce bigger returns than smaller-budget films.46 Consumers can only be aware of so many films at a time, so it makes sense to use stars, special effects spectacles, and marketing to make a few special films stand out from the rest.47

Studios frequently pair blockbuster spectacles with a new technology of presentation. Steve Neale and Sheldon Hall note that ever since the "special" and "super special" films of the 1920s there have been "large scale, high cost" films that feature special distribution strategies, epic content, spectacular images, and new technologies.48 They see continuity between the special and the contemporary Hollywood blockbuster. The scale of the spectacles in roadshow features like *Ben-Hur* (1959) and *The Ten Commandments* (1956) went hand-in-hand with Camera 65 and VistaVision widescreen technologies. As Neale writes, "one of the elements that affects both (the blockbuster's) cost and their presentation is their deployment of expensive, up-to-date technology."49 This might include novel special and visual effects, or some novel technique of exhibition such as CinemaScope, Technicolor, synchronized sound, or 3D.

The centrality of the blockbuster, and the exclusivity of its "representational prowess," might be an important component of what makes a film profitable in ancillary markets and secondary distribution, but its centrality and exclusivity also carry a tacit meaning.50 Paul Allen argues that the Hollywood blockbuster allows studios to promote new technologies and effectively "renegotiate the industry status" of Hollywood.51 In other words, these films author what Hollywood cinema is, and what it will be in the future. "Only a blockbuster – big, expensive, star-laden – could hope to carry the weight of expectation that a major new type of cinema technology brought with it."52 Allen believes this logic is at work in the aesthetics of blockbusters. In films as diverse as *The Jazz Singer* (1927) and *Jurassic Park* (1993) there are moments that are given over to pure spectacle, and in these moments of suspension a new technology that is being positioned to transform the industry is put on display.53

Allen's account of the Hollywood blockbuster is consistent with interpretations of special effects aesthetics by other scholars. For example, Dan North builds on the idea of the incredulous spectator from Gunning and Gaudreault's concept of the "cinema of attractions" to argue that visual effects are meant to be recognized and enjoyed as illusions that speak to the nature of technological mediation.54 Special effects are about the medium; they are about the illusion of cinema. A well-positioned and well-designed blockbuster can even renegotiate the status of consumer media technologies. As Charles Acland observes, blockbusters such as *Avatar* (2009) function as "technological tentpoles" that set into place new protocols for consumer and professional technologies.55 *Avatar* was designed to introduce stereoscopic 3D as a new standard for spectacle films, both in theaters and in homes. It successfully drove a range of technological adoption, from camera systems (designed by James Cameron) to digital cinema systems and consumer electronics (though the latter eventually fizzled). The Hollywood blockbuster is therefore both symbolically and economically positioned to be the site where important technological changes take place.

The blockbuster is a tool available only to the wealthiest studios, as it requires mountains of upfront capital. Blockbusters are also a competition, though, as each studio strives to put forward its vision for the industry. R&D is a similar bet to a blockbuster. It is an upfront cost that only certain companies can afford, made in an effort to gain some sort of competitive edge. It keeps the powerful in power and it enables them to define the industry. R&D also produces new technologies that provide precious visual novelty to attract blockbuster audiences. The logic of the Hollywood blockbuster thus has some important synergistic correspondences with R&D. Its voracious hunger for visual novelty and technological display proved to be a perfect fit for the established military-industrial-academic R&D infrastructure of computer science.

The function of the blockbuster has remained relatively unchanged over time. Yet the logic of the R&D complex represents something new. While prior blockbusters hinged on new technologies like widescreen formats, sync sound, or color, the budgets of the films themselves did not support the R&D that created those technologies. There may have been such cases, but they would have been uncharacteristic of the time. R&D represents a new way to use financial scale in the interest of competition.

#### "Hollywood Software" and the Cost and Risk of R&D

While the Hollywood blockbuster is perhaps the most visible place where computer graphics R&D started to transform parts of Hollywood in the 1980s, the economic and organizational significance of R&D does not stop there. Once a VFX or animation studio develops a new image-making tool for a given project, several strategic and economic implications follow. Defending IP ownership and profiting from it has become a significant part of animation and VFX studios' operations.

According to former Pixar CFO Lawrence Levy, before his arrival at Pixar in 1995 the company did not know how to make money. Steve Jobs was hoping that Pixar would make 3D animation like desktop publishing and that they could sell their software to millions of computer owners. At that point, however, they were only selling their products to film and animation studios. This was a difficult business to be in because it meant their marketplace was very small. In Levy's words, "…when studios are making films with special effects they need lots of Renderman… otherwise, they don't need it at all."56 Pixar's self-image is that of a company that always knew it wanted to be a studio. But from Levy's account it is clear that at one point they did not know whether they were a technology company or an animation studio. In truth they are still both, as are most of the top animation and VFX studios. Although Pixar markets itself as a studio, it still earns a great deal of revenue from selling its technology. It is telling that Levy's first consequential move at Pixar was to threaten to sue rivals Silicon Graphics and Microsoft for using Pixar's proprietary technology without permission.57 That single move brought in millions in annual revenue. To this day, their technological IP is extremely valuable to them. In filings with the Securities and Exchange Commission (SEC), Pixar and rival Dreamworks cite technological IP as a key asset.58 VFX studio Digital Domain assessed the value of its IP in 2017 at 7% of its total value.59

To try to quantify how much R&D VFX and animation studios do, I conducted a study in which I searched the records of the US Patent Office for the names of the largest studios in operation today (see Fig. 3.3). This does not show us the patents of studios that have closed, but it does offer a longitudinal image of contemporary studios. The data shows that Pixar was an early leader in patents, and that it has continued to lead the way, peaking in 2008 with about 48 patents awarded that year. Since 2003, several other studios like Dreamworks and Digital Domain have been filing many patents per year on average. When comparing large animation studios to large game studios, the number of patents seems to be about equal when you factor in the scale of their revenues.60 While this approach does not account for any secret technologies, or technologies registered to parent or holding companies, it provides clear evidence that R&D is a major activity for these studios, and that research activities have generally risen over time.

**Fig. 3.3** Total patents filed by top VFX and animation studios

Animation and VFX studios do not build all of their own technology, of course. During the 1980s and 1990s many studios would have used Silicon Graphics hardware and software as a basic platform, and more recently they would all use software like Autodesk's Maya for basic modeling and animation. Often specific challenges in a given project can also be solved with licensed or off-the-shelf software. In their best practices manual, the Visual Effects Society (VES) addresses this problem in a section titled "To Build or Purchase?" written by Stephan Vladimir Bugaj, a technical director at Pixar. Bugaj's advice to readers is that if you can buy software you probably should, as it can be risky and expensive to build custom software.61 Furthermore, anything a moderate-sized VFX or animation studio can build will be eclipsed by the work of larger studios with more resources. However, if expectations and budgets are high, there are also good reasons for large studios to invest in R&D.

Often it makes more sense to keep that technical labor in-house and build custom technology. As a technical director at Disney Animation Studios told me, "It's best to have in-house developers, in a sense that support and development time is much more rapid and flows naturally, as your artists and technical directors are working in the same place as software developers. This means rapid prototyping, integration, testing, execution, and support."62 In short, if a VFX or animation studio is large enough to afford in-house R&D, it can offer services that smaller studios simply cannot, and it can fold the R&D process into the production process more effectively. Furthermore, some projects call for a high degree of visual novelty, especially so-called "hero" effects in blockbusters that are designed to draw the attention of audiences. This has been particularly true of nonlinear animation throughout much of its history. Looking at the promotion of a film like *The Day After Tomorrow* or *The Perfect Storm*, it is clear that the spectacular, uncanny appearance of the gigantic simulated waves in those films was a key selling feature.

If a studio chooses to build its own technology, there are also potential financial benefits down the road. R&D produces assets, and those assets have multiple kinds of value. For one, animation and VFX studios all note the value of technological exclusivity in their business.63 Exclusivity allows studios to produce images no one else can, but it can also serve a strategic purpose. Keeping competitors from having access to a technology can raise their operating costs, a key factor in the ultra-competitive VFX bidding system. This is especially devastating when a company withdraws a technology that was formerly available, forcing its competitors to scramble to build their own. Studios cite this as a major risk in both the animation and VFX industries.64

The second major source of value R&D can produce is technology licensing. Companies such as Pixar and Digital Domain list millions of dollars in annual revenue from licensing.65 Sometimes licensing is as simple as taking money from a company that has infringed on your studio's patents. Lawrence Levy's move at Pixar is a good example of this. Many companies were using ray tracing to render 3D images without Pixar's permission. They were not necessarily using Pixar's software, but they were using an idea Pixar owned. Thus, Levy was able to threaten to sue them and to start charging them annual licensing fees. In other cases, licensing is a much more full-service contract. Companies will pay not just to use software, but also to receive help implementing the software and to receive ongoing support. Here the line becomes somewhat blurry between doing contracted production work and software licensing. The difference between a studio and a software company can thus be indistinct. For example, Fusion CI, a nonlinear animation company co-founded by geophysicist Mark Stasiuk, offers custom solutions for nonlinear animation that utilize their proprietary technology.66 It is difficult to say exactly whether they are a software company or a VFX studio. But this is true of many animation and VFX companies that conventionally style themselves as studios. Digital Domain, for example, gets approximately one-third of its revenue from licensing software and doing specialized sub-contracted work.67

The third way R&D can produce revenue is through selling technology as part of an off-the-shelf product. Again, the lines between licensing technology and selling software are blurry here, but the distinction basically boils down to target market and volume: a studio can charge select studios large amounts for a custom technology, or it can develop a sleek, user-friendly, adaptable piece of software and market it to any would-be animators in the professional and prosumer markets. One example is Disney's XGen hair simulation software, which it publishes in association with software giant Autodesk.68 The history of fluid simulation is full of examples of technologies that started in studios and ended as off-the-shelf solutions. In some cases, an individual researcher will leave a studio or software company to found their own company to offer these services. Robert Bridson, the researcher who developed a fluid implicit particle method for fluid simulation, started out doing work for ILM but ended up co-founding Exotic Matter. Eventually Autodesk bought Exotic Matter so that it could build the technology into the Maya software suite.

There is a fairly clear progression in cases such as these. A researcher with a background in physics, mathematics, or computer science does some fundamental work as a graduate student or postdoctoral fellow, perhaps in a lab with connections to a studio or software company. Next, a studio hires them to do specialized work on a specific type of animation for a blockbuster spectacle. Over time the field moves forward, software becomes refined and more efficient, computing power becomes cheaper, and eventually the technology goes from being an exclusive property deployed for spectacular effects to being something anyone can purchase for a few hundred dollars and implement in their production pipeline without much difficulty. This is a system that is always producing the new, one that creates potential for profit at every step from the emergent to the dominant, until it someday becomes what Charles Acland refers to as the "residual media" of the past.

A clear sign of how valuable R&D can be is how carefully its products are protected. Dreamworks writes, "Our revenue may be adversely affected if we fail to protect our proprietary technology or enhance or develop new technology… We rely on a combination of patents, copyright and trade secret protection and nondisclosure agreements…"69 Indeed, litigation against infringers seems to be a way of life for many studios. Digital Domain notes multiple ongoing lawsuits in its annual filings. These companies must also be careful not to infringe on other potentially litigious competitors' IP.

While R&D can produce extremely valuable products, it can also be extremely risky. R&D mediates between the unpredictable, exploratory nature of science and the application-oriented nature of engineering and design. All R&D is uncertain to some degree. Materiality asserts its agency through its implacable affordances and limitations. Thus, if a VFX studio signs a contract to complete a shot with a technology it has not built yet, it is exposed to considerable risk. Problems could easily present themselves that make development much harder. What the studio set out to do may turn out to be impossible. This is something VFX studios openly acknowledge. According to a 10-K public report filed with the SEC by Digital Domain in 2011, which lists the company's sources of revenue, costs, and potential risks, R&D is a major source of financial cost and risk.70

Other industries that make use of technology often outsource much of their technological heavy lifting. This is a point VFX industry veteran Mike Seymour makes on his trade website *VFX Insider*. Apple, for example, puts a lot of research into the design of its products but does not build their actual components. Instead, it coordinates closely with manufacturers who might offer a custom or off-the-shelf solution.71 Different generations of Apple's iPhone contain processors, memory, and LCDs from various third-party suppliers like Samsung, Foxconn, or Qualcomm. One might contend that all tech companies by definition develop technologies. But what is less common is for a company to develop the most basic building blocks of its technology. By contrast, VFX and animation studios face the unique challenge of developing some of the most basic technological components of the products and services they sell. The uncertainty and contingency of the research these businesses support is both immensely valuable and immensely risky. It affirms how uncertain R&D can be, but it also demonstrates the potential value of shaping it.

Clearly there are some industries where it is common for companies that deliver a final product to do extensive R&D. The pharmaceutical industry is one example. Perhaps Seymour overstates the situation somewhat, but his observation points to the way R&D is connected to the volatility of the VFX industry. This issue has been the subject of considerable discussion since the bankruptcy of Rhythm & Hues and in the related labor organizing of VFX workers. There is a general perception in the industry that there is something fundamentally defective about the way it does business.

The VFX industry is built around fixed contracts and competitive bidding. In the majority of cases, the film studio does not factor the details of what R&D a VFX job will require into the initial planning stages. If an effect requires a new technology, it will be the VFX studio's problem. This may sound like a controversial statement, given how fundamental technology is to the blockbuster. If a film studio employs its own VFX supervisor, it will have some rough idea of how much work a given shot will take.72 Furthermore, iconic films like *Avatar* (2009) seem to factor extensive VFX technological development into early planning. Publicity for *Avatar* touted that James Cameron had to wait a decade until technology was sufficiently advanced to film his screenplay.73 But films helmed by techno-auteurs like Cameron or George Lucas are exceptional cases. For the average blockbuster of the past few decades, planning and budgeting for R&D has been entirely the VFX studio's responsibility.

Once a VFX studio is invited to bid on a project, it will go through a *breakdown* of the film, approximating the costs for each shot. Bidding VFX studios are relatively opaque in their proposals; all the film studio sees is the price per shot.74 A number of costs are folded into this single number, including facilities, labor, and R&D. In other words, the contract between the film studio and the VFX studio does not say "these shots will require us to invent a new way of animating water, so they will cost this much"; it simply states how much the shots will cost. The problem with this is that R&D is intrinsically uncertain and risky. This has prompted VFX supervisor Ben Grossman to advocate for a new model that separates technology building from animating and overhauls the VFX bidding process.75 Under this plan, production and technology development would be done by separate companies.

Although people like Seymour and Grossman might see the VFX industry as dysfunctional, its logic is certainly self-perpetuating. Just as the upfront costs, scale, and horizontal integration of the blockbuster ensure that it is a type of film only available to the largest conglomerated studios, the demands of R&D ensure that only the biggest VFX studios will win the largest contracts, and only the largest animation studios will release features with cutting-edge animation. The spectacle of rarefied technology ensures the maintenance of this system. It also ensures that the studios large enough to do extensive R&D can use their technological IP either to license it out or to maintain strategic exclusivity. It is a way for the studios currently at the top to stay at the top.

These calls from people in the film industry to separate production from technology are telling because they point to just how imbricated the two have become. It is difficult to imagine them ever being disentangled at this point. Indeed, the following chapter will demonstrate how, in the case of nonlinear animation, it is difficult to tell the difference between animation work and engineering work.

The animation and VFX industries have become such a force for R&D that other fields and industries use and adapt their tools. In an interview on digital technologies in architecture, engineer and architect Chuck Hoberman describes how he used "Hollywood software" to design the complex folding spheres he is known for.76 His use of this term demonstrates what an extensive machine Hollywood's R&D complex has become for producing computer graphics technology. The ability to render photorealistic images is clearly useful in architecture as well as a variety of other industrial and educational applications. But the utility of Hollywood software goes beyond rendering. For example, nonlinear animation tools can provide the basis for rudimentary scientific simulations. Fusion CI co-founder Mark Stasiuk used to make plug-ins for the popular nonlinear animation software Realflow to study geophysical fluid mechanics.77 In recent years, Dreamworks has raised thirty-five million dollars as part of an initiative to sell software to new industries, and Digital Domain has sold its technology to companies such as Samsung.78

The film industry thus does not rely on ready-made, generalized computer graphics tools. Instead, it has been a key participant in computer graphics R&D since the 1980s. Engaging the military-industrial-academic complex that was already established, Hollywood slowly began to fund its own projects and shape research toward its own ends. Tireless promoters like Pixar's Lawrence Levy might style this R&D turn in VFX and animation as the result of their bringing "Silicon Valley bravado to the film industry." Pixar indeed plays a pivotal part in this history, and it is a paradigmatic product of the Dot-com boom. But much of what is implied in the Silicon Valley discourse is easily refuted. Far from being the product of entrepreneurial mavericks, the rise of the logic of R&D extends from the military-industrial complex, and it is sustained through cooperation with researchers at public institutions and through public funds. Furthermore, this is hardly a field in which one start-up can disrupt the whole industry. The high cost and risk of R&D ensure the maintenance of the economic status quo. It would perhaps be more accurate to say that VFX and animation resemble the reality of Silicon Valley rather than its lofty discourse, given the way tech companies like Amazon, Alphabet, and Facebook dominate their respective markets.

#### Notes



### Engineering Moving Images: "Tech Dev" Meets "Look Dev"

The past chapters have covered the way R&D has become a vital part of studio economics and how R&D, computer graphics, and nonlinear animation share a complex, interlinked institutional history. Chapter 2 also proposed a theoretical framework for understanding nonlinear simulation as a form of animation that hinges on engineering speculative models, as a kind of R&D experiment. All of this points toward a convergence between technology development and animation. The question remains whether any of this is evident in production practices and studio organization over time, or whether R&D and animation production work have remained separate domains. The screen credits for contemporary animated and VFX-laden features have whole sections dedicated to explicitly technical staff, and Chap. 2 established how studios began to employ workers with titles like "R&D" and "Principal Graphic Scientist" in the 1980s. But all this technical and R&D activity could be insulated from the rest of production, with artists clearly separated from technicians and researchers. This chapter will go through several examples showing that production did in fact become imbricated with R&D starting in the 1990s, leading to even more unusual screen credits like "R&D Artist."

Understanding the role of R&D in production is fundamental for understanding changes in production labor since the introduction of digital technology. Scholars like Tony Tarantini, Hyejin Yoon, Edward J. Malecki, and Hye Jean Chung have noted the role digital technology has played in changes in training, hierarchies, and the internationalization of labor as part of broader post-Fordist trends.1 Technology was not just an external force acting on these media industries, though. It was being produced by studios for specific purposes. Thus, it is important to understand how technology development has become integrated into production practices as an internal force of change.

As studios began hiring researchers, sponsoring research, creating connections with institutions, and participating in scholarly conferences in the 1980s, it makes sense that R&D would start to play a role in production work itself. At first this would only have been the case in very specialized, rarefied effects sequences, like Loren Carpenter and William Reeves's work creating bespoke graphics for the "genesis sequence" in *Star Trek II: The Wrath of Khan*. Over time it would become more and more common for R&D to play a direct role in production. Technologies like the nonlinear animation work Carpenter and Reeves were doing require us to rethink the relationship between animation and automation. They were making software in order to make an image. All nonlinear animation "FX" work since has, to a greater or lesser extent, followed this same logic. An FX artist makes animations by making simulations that produce unpredictable movement. They do this by choosing the right software and plugins and making them work together, writing scripts in a given program's language, and manipulating the parameters of a simulation. All of this work sits in a liminal space somewhere between animation production and technical work. Nonlinear animation thus provides a particularly good example of how R&D and production have overlapped.
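The logic of this parameter-driven "FX" work can be made concrete with a toy sketch. The following Python fragment is a hypothetical illustration only, not any studio's actual tooling: the "artist" never positions anything by hand but instead sets parameters (gravity, turbulence, a random seed) and lets numerical integration generate the motion, which is the sense in which nonlinear animation is engineered rather than drawn.

```python
import random

def simulate_splash(frames, gravity=-9.8, turbulence=0.5, seed=42):
    """Toy particle 'splash': motion emerges from simulation, not keyframes.

    All names and parameters here are invented for illustration. The
    artist's creative input is the parameter space: timestep, forces,
    and the seed that makes an unpredictable-looking run repeatable.
    """
    rng = random.Random(seed)
    dt = 1.0 / 24.0  # one timestep per frame at 24 fps
    # Emit ten particles with slightly varied upward initial velocities.
    particles = [
        {"pos": [0.0, 0.0], "vel": [rng.uniform(-1, 1), rng.uniform(4, 6)]}
        for _ in range(10)
    ]
    trajectory = []
    for _ in range(frames):
        for p in particles:
            # A small random horizontal force each step stands in for turbulence.
            p["vel"][0] += rng.uniform(-turbulence, turbulence) * dt
            p["vel"][1] += gravity * dt  # gravity pulls particles back down
            p["pos"][0] += p["vel"][0] * dt
            p["pos"][1] += p["vel"][1] * dt
        trajectory.append([tuple(p["pos"]) for p in particles])
    return trajectory

# Same seed, same motion; a new seed yields a different but equally
# plausible splash. Directing the simulation means tuning these inputs.
frames = simulate_splash(48)
```

The point of the sketch is the division of labor it encodes: the code is the animation tool, and "animating" consists of scripting and parameterizing it, which is why FX work sits between production and engineering.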

The Cold War logic of technology development has also very clearly influenced the general organization of VFX and animation "workflows" since the 1990s. Workflows have their roots in the concept of project management, which was first developed as a way to keep US nuclear missile programs ahead of their Soviet rivals. This idea was then carried further by sectors like the automotive industry as a way of organizing the development of new products, called "product development." The design of VFX and animation workflows is also heavily influenced by software development principles, especially a school of software development founded in 2001 and referred to as "agile," which emphasizes flexibility, reconfiguration, iteration, and customization. These organizational paradigms for developing technology all played an important part in shaping development-oriented animation and VFX production workflows. Today the term "development," often shortened to "dev," is used to describe not just technical work like software development, R&D, or "tech dev," but also production work like "look development" or "look dev," which can refer to an iterative process of design refinement or to a specialized job where artists shape lighting and rendering style.2

The implementation of these workflow paradigms required the extensive engineering of the production "pipelines" that connect different production processes to each other and enable the "development" of an animated sequence. These pipelines are constantly being rebuilt for different projects, enabling agile development, and further blurring the line between production work and technical work.

These principles of flexibility and agility did not emerge in a vacuum. They have strong correspondences to post-Fordist flexible accumulation, and they have led to a highly precarious labor system that sees workers moving from one six-month contract to the next. These factors need to be considered together. The shift from large-scale Cold War federal R&D programs to private industries, the rise of flexible approaches to project management, and the political-economic turn toward deregulation and entrepreneurialism all go hand in hand. This chapter will address each of these points in turn, starting with the rise of workflows, project management, and software development, moving on to the correlated emergence of software pipelines, then to nonlinear animation practices, before finally reflecting on how the gradual disappearance of the line between engineering and production has affected labor and the construction of worker subjectivities.

#### Project Management, Product Development, and Agile Development

Since 1990, VFX and animation production have increasingly been defined by the dual concepts of workflow and pipeline. In the quotidian parlance of these industries, workflow and pipeline refer to all the work that needs to be done to achieve a final product. Although people in the industry often conflate the two terms, each has an important, distinct technical definition. The Visual Effects Society handbook defines a workflow as "a specific set of procedures and deliverables that defines a goal."3 Workflow describes each stage of a production, all the jobs that need to be done to ship the final product. Pipelines are the technical infrastructure of data exchange that makes workflows possible.

As many scholars have observed, major film studios, such as those in the Hollywood or Weimar film industries, operated like factories.4 David Bordwell, Janet Staiger, and Kristin Thompson argue that factory-style management is a key component for understanding the classic Hollywood studio system, even its aesthetics.5 The concept of workflow in animation and VFX is similarly borrowed from industrial management theory. But there are important differences between the assembly-line style of twentieth-century studio film production and these more recent approaches. While the former focuses on building a regulated and reliable system for outputting one product after another, the latter treats each film as a discrete project. In a sense, animation and VFX studios re-tool the factory for every film.

To understand this distinction in its simplest terms, consider two of the pioneers of industrial management: Frederick Taylor and Henry Gantt. While Taylor focused on the regularity and efficiency of factory production lines, Gantt focused on organizing the steps needed to complete a task. Gantt's approach is illustrated quite clearly by the Gantt chart, a visual organizational tool still used today. In a Gantt chart, a project manager maps out multiple parallel jobs along a grid with a time-based axis, carefully timing each job in order to avoid slowing subsequent jobs that rely upon its completion. Animation and VFX production represent a trend away from the Taylorist approach and toward Gantt's, a move toward treating production like building a skyscraper or a steamship and away from turning out a uniform product in vast quantities. Gantt's approach is an early example of what would become project management.

The Project Management Institute defines a project as being "temporary … with defined scope and resources."6 Thus, project management does not apply to constant day-to-day operations. Project management is also intimately linked with R&D. It emerged as a term in the early Cold War, alongside cognate concepts like operations research and systems engineering, as one of three "approaches to big technology."7 Brigadier General Bernard Schriever came up with the now influential concept of project management out of necessity in the context of the nuclear arms race. He was responsible for the US military's new Intercontinental Ballistic Missile program, which was under extreme pressure to stay ahead of Soviet aerospace and nuclear advances. Thus, Schriever began thinking about how to facilitate technological development as fast as possible. His ideas became so influential that they caught the attention of Secretary of Defense Robert McNamara, who spread the principles of project management throughout NATO militaries. In Schriever's hands, project management was not just a way of organizing labor for a specific project; it was a meta-technology, a technology for creating the conditions of technological advance. This goes a long way toward explaining why project management has become influential in animation and VFX: as the preceding chapters have established, studios in these industries have increasingly begun to support R&D and produce valuable technological properties.

In the 1980s, project management was developed further by private industries outside the military-industrial complex, in fields such as automobile manufacturing and pharmaceuticals. From this work emerged the concept of "product development," a version of project management that focuses specifically on creating a marketable product. One of the earliest and most influential examples was the Toyota Production System (TPS), an umbrella term for Toyota's approach to management that includes concepts such as just-in-time logistics and total quality management. TPS was a key idea in product development because it considered the entire process of getting a product to market: from initial concept, research, and engineering, to manufacture and distribution. R&D-focused project management and product-focused product development together provided the conceptual groundwork for changes in animation workflows in the 1990s and in VFX workflows soon after.

The work of Steve Jobs is perhaps the most iconic example of technology-focused product development. Product development defined Jobs' glorified return to Apple in 1997, which saw the company's value grow almost a hundredfold thanks to products like the iPhone. Before his return to Apple, Jobs was the majority owner of Pixar, where he also practiced his now famous approach to product development. When he bought the company, he had intended to develop a product that would make 3D animation broadly accessible, like desktop publishing.8 So focused was he on product development that he insisted on spending scarce resources on the distinctive sculpted granite design of the Pixar Image Computer P-II.

Long-time Pixar CEO Ed Catmull was also heavily influenced by Toyota's TPS philosophy.9 Some studios might hesitate to refer to their movies as products in public, but Catmull wears his product development mindset on his sleeve. In an article for the *Harvard Business Review* he writes,

People tend to think of creativity as a mysterious solo act, and they typically reduce products to a single idea: This is a movie about toys, or dinosaurs, or love, they'll say. However, in filmmaking and many other kinds of complex *product development*, creativity involves a large number of people from different disciplines working effectively together to solve a great many problems. The initial idea for the movie—what people in the movie business call "the high concept"—is merely one step in a long, arduous process that takes four to five years.10

In some respects, the product development mindset has become ubiquitous in large-scale film production. The high-concept film, which integrates planning for ancillary markets, revenue streams, and corporate synergy, is a commonplace of conglomerated Hollywood. Yet Catmull is identifying another aspect of the product development mindset here. He is trying to communicate how Toyota-like his approach is. He is thinking about the process, from the original idea all the way through every step of production. He is talking about refining the film, about *developing* it.

A handbook on contemporary animation and VFX workflows and pipelines reads, "the main difference between factory goods and art is that art goes through a review and refining process."11 The point the authors are making is that animation and VFX production are a development process, just as Catmull says. Indeed, some management researchers have singled out these industries as examples for studying what they refer to as the "theory of managing creativity-intensive processes" (TMCP). Researcher Stefan Seidel, for example, has studied the VFX industry as a model for TMCP because of how "process aware" its production processes are.12

Project management and product development concepts from industries like consumer electronics and automotive manufacturing have clearly influenced the animation and VFX industries. And this influence gives us a hint as to how principles for developing technologies and products have shaped these media industries. But these fields pale in comparison to the greatest single influence: software development. Software development might seem like a self-evident concept. Isn't all programming software development? In fact, the concept has only been around since the 1980s, and it represents perhaps the most broadly influential application of project management to date. During the 1960s, 70s, and 80s, software industries were undergoing what is now referred to as the "software crisis," in which projects had an alarming tendency to go over budget and underdeliver. As software engineering histories now tell the story, the problem was the lack of a conventional process for how to build something. What steps do you take? What are the best practices? Project management offered a way to make software-building more systematized and rigorous, like engineering. This was also the period when the term "software engineering" became popular as a way of describing an approach to programming that was rigorous and accountable. The dominant software development approach that emerged from this era is now referred to as "waterfall." Waterfall consists of discrete stages, each of which must be completed in turn: requirement analysis, design, development, testing, and release.

The waterfall development model puts emphasis on establishing the requirements of the client before getting into the detailed design stage. This ensures that a team does not spend countless hours and multitudinous resources building a product that does not do what the client needs. While this had clear benefits, it was not long until this gradual, careful approach to software development came into conflict with the demands of private industry. In the 1990s, critics began to gather, and their key complaint was that the world simply moved too quickly for this approach. While waterfall was effective at producing a refined piece of software that did exactly what was needed, over its long development process "what was needed" could change. Thus, in 2001 aerospace engineer Jon Kern and sixteen other engineers and programmers convened to write a manifesto for a new approach to development: *The Agile Manifesto*.13 Agile software development is focused on flexibility and responding to change. The idea is to get a product into a user's hands as quickly as possible, then to respond to the ongoing needs of the user through successive iterations. This is an approach to engineering steeped in the Silicon Valley ideology of entrepreneurial disruption. Rather than publishing a whitepaper at a stuffy conference, these engineers wrote a "manifesto" and published it on a website. The principles of responsive, reconfigurable flexibility that *The Agile Manifesto* espouses define most contemporary private software development.

One can see the influence of this way of thinking in recent changes in how software products are sold. Not so long ago, when you purchased a copy of Microsoft Office or the latest game, you brought it home, installed it on your computer, and that was the end of it. In the late 1990s, ubiquitous connectivity meant that software companies could push revisions, in the form of "updates," over the internet. This was a key tool in solving problems like the Y2K bug, but it also opened the door to an agile approach to product release, where the version available on day one was not necessarily the final product. Today, more and more software products offered by companies like Adobe and Microsoft are shifting to a model where customers pay a monthly subscription for an ever-evolving product, known as "software as a service." This has economic advantages, to be sure. Requiring constant connection to a server is a great way to combat piracy, and this model forces customers to buy the latest editions rather than sticking with what they already have. One might expect that agile development would be incompatible with animation production, but in fact it has been transforming it for quite some time.

Pixar has made agile principles a key element of its public-facing management philosophy since the earliest days of agile development discourse, avant la lettre. One of the studio's favorite promotional anecdotes about the production of *Toy Story 2* communicates its uncompromising commitment to product development and its flexible, responsive workflows. Due to its distribution agreement with Disney, Pixar had to make a sequel to *Toy Story* (1995). This led to the studio running multiple projects at the same time. Because of this increased level of activity, *Toy Story 2* (1999) reached a high level of completion before key decision maker John Lasseter had fully scrutinized it.14 When he finally did, he decided it needed re-working, and the studio made extensive revisions despite being far along in production. Rather than planning a single vision of the film at the beginning and seeing it through to the end, Pixar made a product part of the way, tested it, found it wanting, and went back to the drawing board. They were willing to iterate and revise. This story likely communicates the studio's self-image more than its actual practices, but it is still revealing. The point is to demonstrate the importance of building flexible structures that allow for iteration and revision, of building a workflow where it is possible to make changes at a late stage. These values have spread far and wide in the animation industry and, significantly, in the VFX industry. The software development logic that Pixar championed can be observed spreading through the design of VFX production workflows in the 2000s.

#### Organizing Production Workflows

VFX workflows show both how ingrained the logic of "dev" has become in VFX production since the early 2000s and, through their intimate link to pipelines, how building connective infrastructure has become a fundamental part of production over the same period. The nature of VFX workflows has unquestionably been influenced by Pixar's early example, but a variety of factors worked in concert. As more people with software engineering and computer science backgrounds entered VFX studios, they brought these ideas with them. Even more importantly, the flexibility of agile development principles responded to challenges VFX studios were facing as vendors who must competitively bid for studio contracts. Agility offered a way of living with the unpredictable demands of film studio clients in what was becoming a ruthlessly competitive industry. Thus, VFX studios' use of agile principles is also a product of neoliberal economic conditions.

Since at least the early 2000s the VFX industry has been defined by an ultra-competitive bidding process.15 This process starts with the film studio assembling a short list of VFX studios based on existing relationships, reputation, experience, and the VFX studio's show reel.16 One VFX producer's handbook from 2010 likened this process to casting actors: certain vendors are suited to certain roles, and a studio-side VFX supervisor can judge their fit based on their past work.17 Once the studio has established a short list of prospective VFX studios, it will ask the vendors for competitive bids. Tax incentives have more recently become an important factor in bidding. A well-organized film production will have a plan for what local tax breaks it hopes to benefit from, and the VFX vendor will have to be able to commit to employing a certain number of workers in a certain city.18 All of this adds to the complexity of the interaction between the film studio and its VFX vendors, and as a result to the need for flexible development workflows.

The combination of competitive bidding with the implicit need to respond to changing demands has been a key point of contention within the VFX industry. When Rhythm and Hues famously went bankrupt after the overages of *Life of Pi* (2012), many workers and VFX studios rallied around this complaint. More recently, the constant revision of the vendor's bid has become baked into the production process. VFX studios now have a department that calculates and updates their costs with every new unforeseen development and challenge. The industry has, in other words, resigned itself to the reality of constantly changing demands and has developed more agile procedures to deal with it.19

Contracts between studios and VFX vendors specify budgets and also delivery "turnover" dates, the specific dates when the VFX vendor will turn over their finished work.20 In the 2010s, though, it became more common for studios to use early work for promotional purposes.21 These sequences may not be the same as the final product that appears in theaters. Close analysis of trailers and features reveals how different they can be. For example, if you compare the first trailer for *Guardians of the Galaxy* (2014), a film whose visual effects were produced in part by the British studio Moving Picture Company (MPC), with the final version shown in theaters, you can see a great many differences.22 The fact that parts of the film are completed and then revised demonstrates agile development thinking.

Once the studio and VFX vendor establish what they need for each shot, the VFX vendor can begin building the organizational infrastructure they will need for the job. While much of the in-house infrastructure, such as office space and workstations, will likely be the same from project to project, the studio will need to arrange many things before a project can start. For starters, they may sub-contract certain jobs to other VFX studios. At the very least the studio will need to hire workers on a project-specific six-month contract. Since the 2000s the norm has been for the number of workers to generally follow a bell curve, with few workers staying on from the very beginning to the very end.23 As one VFX producer's manual puts it, a VFX unit "may spring into life almost any time during production or postproduction. Its life may be as short as a mayfly … or it may last several months."24

Even certain hardware infrastructure that was formerly in-house became more flexible and agile-friendly in the 2010s. While VFX and animation studios used to have vast cutting-edge server farms, now they can use cloud-based servers like Amazon Web Services (AWS). In 2011 the VFX studio Zero VFX developed a cloud-based rendering tool tailor-made for the industry called Zync. In 2014 Google purchased Zync and integrated it into the Google Cloud Platform. This service is noteworthy because it even takes over some of the software needs of VFX studios. You simply send them a V-Ray or RenderMan project and they do the rest. The homepage for Zync reads: "Two things continue to be true in visual effects and rendering projects: schedules fluctuate, and the effort to get to final remains impossible to predict."25 Thus, even the basic infrastructure of VFX studios is becoming profoundly reconfigurable and reprogrammable as a way of responding to uncertainty.

It is the job of several workers to manage all the unexpected changes coming from the film studio and to facilitate the flexible flow of content from shoots into the VFX workflow. Prime amongst these is the VFX supervisor. In the late 1990s and early 2000s the role of the VFX supervisor started to become more deeply integrated into films' production. Going far beyond simply sourcing the "plates" (live-action footage) they would need from film shoots, supervisors became more involved in planning shots and solving on-site technical problems.26 Indeed, by 2010 it was not uncommon for VFX supervisors to work as second unit directors.27

The vast majority of VFX shots contain at least some content shot on a set or location, and since the 2000s the variety and amount of data gathered seems to have steadily increased. The VFX supervisor and coordinators oversee this work, done by "data collectors" and "data wranglers."28 Plates will contain some information that needs to be kept, for example, an actress's performance, and some that needs to be removed and replaced by a composited effect, for example, wires from a special effects sequence. In addition to this, data collectors will gather information about the shoot like the lenses used, frame counts, file format information, and pictures of sets and locations.29 It has also become the norm since 1999 to record ambient lighting using some sort of HDRI system.30 Additional data that a VFX vendor might gather today includes performance capture data, light detection and ranging (LIDAR) volumetric scans of sets, and even volumetric scans of performers and props.31

A production photo of any contemporary Hollywood blockbuster will reveal just how prolific these forms of data collection have become. It is not uncommon to see a performer in a green leotard covered in motion capture tracking points, holding a green mandrill (stand-in object) in front of a green screen set. Though many of these processes have become routine, the types of assets and content a VFX vendor must manage vary from project to project. They also do not arrive exactly when the VFX artists need them. This, again, requires an agile approach to project management. The vendor has to be ready to receive these different types of materials, manage them, and integrate them into their production workflow. The VFX workflow is thus a complex, integrated, parallel process custom designed for each project to be able to respond to the changing demands of a client studio. It also involves a substantial amount of technological configuration for every project to respond to these contingencies. The software development and project management concepts used in designing this system model film production as a development task.

VFX workflows have developed in complexity and flexibility since the early 2000s, and they have very recently reached a point of agile development that challenges some of our most basic assumptions about film production. Movie "development" has in fact already begun to resemble the "software as a service" models of Microsoft and Adobe. This may sound far-fetched at first. Even if workflows involve extensive iteration and change, in the end a movie is a movie. One would assume you cannot send a movie to audiences, find out if it suits their needs, then go back to the drawing board. Recent events have demonstrated otherwise. In 2019 Paramount Pictures released the first trailer for *Sonic the Hedgehog* (2020), a VFX-laden family film based on the character from the Sega video game series. The trailer received extensive negative reactions from the public, who hated the design of the titular character. This turned into a promotional nightmare for Paramount, as negative reactions on social media became such a phenomenon that they were picked up by the press.32 Paramount responded by having their main VFX vendor MPC revise the character model and re-do the animation for the entire film. This pushed the release date past the precious holiday season, but it turned a promotional disaster into a success, as the revision, which responded to the public's complaints, garnered further media attention. The studio tested its product with the public and revised it.

During the same release season there was a second case that pushed the logic of agile development even further than *Sonic the Hedgehog*. The film adaptation of the popular musical *Cats* met a similar negative reaction from the public, though on an even greater scale. The production company Working Title had their VFX contractors, including MPC, tweak their animation in response, in a production scramble that, according to the film's director, saw them finishing editing the day before the release after thirty-six hours of constant work.33 These last-minute revisions introduced some new mistakes, though, including unfinished animations. The studio then re-uploaded a second, fixed version of the film a day later through digital distribution to theaters. The viewers who saw *Cats* on its opening weekend thus saw a different film than those who saw it after. Clearly, films have been released in different versions before. Even before special edition DVDs and director's cuts, films were being redubbed for international audiences. But Working Title's use of digital distribution and its rapid release of fixes bear a striking resemblance to the agile distribution of software, suggesting this logic is continuing to spread.

The concept of development was born in the Cold War R&D complex as a way of outpacing other nations' technological advance. From there it was adapted to developing consumer products and software, and over time these organizational paradigms began to turn toward principles of flexibility and constant product revision. Animation and VFX have been using and contributing to these concepts since the 1990s, and VFX management has seen a particular intensification of these concepts in the 2000s as part of a post-Fordist response to the contract and bidding system. This discursive shift toward seeing media production and creative practice as "dev" has produced a situation where constant reconfiguration is the norm in VFX, and that reconfiguration entails constant engineering as an integrated part of production. This emphasis on constant redesign is also clearly at work in production pipelines.

#### Connecting Production Pipelines

The structure and organization of a workflow would be impossible without a technical infrastructure. This is where the production pipeline comes in. To borrow the phraseology of Bruno Latour, pipelines are workflows "made durable." Pipelines are mutable, they change with every project, and building a pipeline has become a major task on any large project.34 The VFX and animation pipeline is the reprogrammable infrastructure that allows workflows to be flexible and allows collaboration between different departments and workers through the exchange of assets, while also facilitating creative control on large-scale projects. In the case of large animation studios, the control is generally in-house, while in the case of VFX some of the direction comes from a film studio. These workflows demand constant technological change. This is why VFX and animation studios are in a constant state of developing and reconfiguring their tools and infrastructures, and this is why people in the industry so often conflate workflows with pipelines; you really cannot have one without the other.

A pipeline facilitates the exchange of data by connecting the outputs of jobs to the inputs of other jobs. In other words, it allows workers to share assets between different departments. Once again, the industrial production line is a useful metaphor here: at its simplest, the pipeline is like a conveyor belt, moving the product from one department to another and spitting out a finished product at the end. However, since the 1990s VFX and animation production pipelines have become far from simple or linear. Instead, they are like a conveyor belt with numerous convergences and bifurcations that engineers can divert and reprogram.35 Every project also has particular challenges that require a specific combination of software, plug-ins, and workers, and the pipeline facilitates the integration of these parts. Programming and reprogramming such a flexible production infrastructure entails a great deal of technical work. This technical work becomes difficult to distinguish from production work, as the selection, customization, and interconnection of different tools is both the work of artists and of technical staff.
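The branching, converging logic described above can be sketched in a few lines of Python. This is a minimal illustration, not a real production tool: the job names and string "assets" are hypothetical stand-ins for the files and scene data that commercial pipelines route between departments.

```python
from collections import defaultdict

class Pipeline:
    """A toy pipeline: jobs consume the outputs of upstream jobs."""

    def __init__(self):
        self.jobs = {}                  # job name -> function(list of inputs)
        self.deps = defaultdict(list)   # job name -> upstream job names

    def add_job(self, name, func, deps=()):
        self.jobs[name] = func
        self.deps[name] = list(deps)

    def run(self, target, _cache=None):
        # Resolve upstream dependencies recursively, caching results so a
        # job shared by two branches (a "convergence") only runs once.
        cache = {} if _cache is None else _cache
        if target not in cache:
            inputs = [self.run(d, cache) for d in self.deps[target]]
            cache[target] = self.jobs[target](inputs)
        return cache[target]

p = Pipeline()
p.add_job("model",  lambda _: "mesh")
p.add_job("rig",    lambda i: i[0] + "+rig", deps=["model"])
p.add_job("fx",     lambda i: i[0] + "+sim", deps=["model"])
p.add_job("render", lambda i: " | ".join(i), deps=["rig", "fx"])

print(p.run("render"))  # mesh+rig | mesh+sim
```

Note how the "model" output feeds both the rigging and FX branches, which then reconverge at the render step; reprogramming the pipeline for a new project is just a matter of registering a different graph of jobs.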

An important part of pre-production is figuring out what software is needed to make a given sequence. As Chap. 3 showed, this can entail developing new software from the ground up, but since the 2000s it more commonly entails choosing the right off-the-shelf software for the job. Once the pieces of software are chosen, it falls on the pipeline TDs (technical directors) to connect them to the pipeline and do any necessary customization.36 Sometimes software companies design their products to work with other programs. For example, there are many programs designed to work with Autodesk's Maya, because Maya is the central hub of most 3D animation work. Houdini, with its nodal workflow design, serves a similar hub function for nonlinear animation "FX." Sometimes, though, a job will necessitate bringing together pieces of software that were not designed to be connected. In these cases, TDs and engineers may need to transcode file formats and protocols and deal with all the subtle problems that arise from using custom scripts and programs. The construction of this connective pipeline infrastructure is mostly the work of TDs on feature-film-sized projects. These are workers with extensive coding experience in different programming and scripting languages. Sometimes TDs are people who have worked in the industry long enough as artists to intimately understand the inner workings of popular software, but they can also be people with computer science backgrounds.37 Thus, there is a certain ambivalence between technical and production experience.

When you consider the complex and interweaving nature of workflows, you can begin to imagine how difficult it is to build this connective infrastructure. A job may require inputs or assets from multiple other jobs, and the output of that work in turn may go out to multiple other workers. Since at least the late 2000s it has been standard practice to have several artists working on the same assets simultaneously.38 The metaphor of constructing a large building like a skyscraper is particularly apt here. As an animation pipeline manual puts it, "it is not uncommon for a single creature in a VFX movie to comprise hundreds, if not thousands, of individual assets that must be assembled to generate a working render."39 One can imagine scores of workers assembling a life-sized model of King Kong or a tyrannosaurus with cranes and scaffolds, with a hand or foot being delivered on a flatbed truck, like some sort of monumental construction project. All of this intricacy is in the name of producing a project as quickly and efficiently as possible, bringing a technological project (a film sequence) to market on time for a scheduled summer or holiday season release. While large projects require extensive planning of these complex pipelines, even small-scale contemporary jobs require artists and technicians to thoughtfully plan their production pipeline.40

Allowing multiple parties from different studios to access or modify an asset like a character model in a hectic scramble is, of course, a recipe for conflict. Several technical features of the pipeline are directed toward managing these potential conflicts. The first and most important technology for organizing the inputs and outputs of different jobs is digital asset management (DAM) software. DAM software was first implemented in the 1990s in television, for twenty-four-hour news stations that had large collections of footage they needed to be able to access quickly. In the case of VFX, the key function of DAM software is keeping track of versions and editing permissions, allowing many people to work with the same assets. As one paper on the subject from 2010 states, "Traditional DAM platforms are not even a consideration when it comes to providing workflow solutions in the digital media industries" because of the volume of data and the complexity of workflows entailed.41 Other techniques that facilitate simultaneous work include the use of "placeholder assets" and low-level-of-detail assets.42 This asynchronous approach to production became so elaborate in the 2010s that it enabled some forms of "virtual production," where filmmakers could see low-detail previews on set in real time.43
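The two functions identified above, version tracking and editing permissions, can be illustrated with a toy asset store. Everything here is hypothetical and drastically simplified; commercial DAM platforms track far richer metadata and integrate with storage and review systems.

```python
class AssetStore:
    """A toy DAM record keeper: versions are append-only, and publishing
    is checked against a per-asset list of permitted editors."""

    def __init__(self):
        self.versions = {}   # asset name -> list of published payloads
        self.editors = {}    # asset name -> set of users allowed to publish

    def register(self, asset, editors):
        self.versions[asset] = []
        self.editors[asset] = set(editors)

    def publish(self, asset, user, payload):
        if user not in self.editors[asset]:
            raise PermissionError(f"{user} cannot edit {asset}")
        self.versions[asset].append(payload)
        return len(self.versions[asset])   # the new version number

    def latest(self, asset):
        return self.versions[asset][-1]

store = AssetStore()
store.register("kong_model", editors={"modeler_a", "modeler_b"})
store.publish("kong_model", "modeler_a", "kong_v1.obj")
store.publish("kong_model", "modeler_b", "kong_v2.obj")
print(store.latest("kong_model"))  # kong_v2.obj
```

Because old versions are never overwritten, two artists publishing in quick succession cannot destroy each other's work, which is precisely the conflict-management role the chapter attributes to DAM software.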

The complexity brought on by having many people work with the same content simultaneously is further complicated by the iterative development approach to creative control, which sees creation as a process of refinement. It is easy to imagine how late changes happen on VFX projects. The above-the-line studio workers likely have a clear vision of what they want, but often they may not be technically versed, or able to communicate that vision in VFX language. Industry manuals and best-practices guides acknowledge that late revisions are an inevitable issue.44 The idea of refining the product over time is also a core principle of the development mindset, as Pixar's story about the development of *Toy Story 2* in the late 1990s demonstrates. Imagine assembling an intricate sequence filled with hundreds of layered elements and having a director ask for one basic element to be changed. Pipeline design must be flexible enough to accommodate this approach to workflow. The integration of different elements is designed so that artists in one department can go back and change a single element without adversely affecting all the cascading subsequent work that relies on it.45

Some of the challenges that workflow and pipeline design deal with are not new to film production. A classic Hollywood studio film was a large project that required the labor of scores of specialists. It seems obvious that studio film productions throughout history must have employed some kind of workflow planning, even if it did not follow the concepts later formalized by project management or its offshoots. A review of major industry journals over the past hundred years using Mediahistoryproject.org's "Lantern" reveals little evidence of theorizing these challenges. Yet there is broad scholarly consensus that studios followed a Fordist factory model of efficient, regulated output.46 Thus, what really differentiates contemporary approaches to production is their zeal for the post-Fordist efficiencies and potential competitive advantages offered by development-minded project management approaches like product development and agile software development.

This quest for agility required that technological development be a part of production. Being able to re-program the pipeline for every project allows for flexible workflows. VFX studios refit the factory for every job, even during the job. It is true that some things stay the same. VFX studios employ some full-time staff, and there are permanent buildings, workstations, networks, servers, and so forth. This is the stuff of first-order infrastructure. Yet even these forms of solidity are evaporating through trends like increased sub-contracting and cloud-based rendering. As film production becomes more agile, it is likely to resemble software development more and more.

All this flexible project management has of course had immense and mostly negative effects on labor practices in the film industry. As John Caldwell and others have noted, the expanded role of VFX and post-production generally has destabilized many traditional production labor roles.47 The intricate and flexible way VFX studios connect their workflows to film studios has enabled the shift away from the once dominant studio system to a competitive bidding system, which has in turn eroded labor unions.48 This is connected to what Toby Miller refers to as the "new international division of cultural labour," where international cities like Vancouver, Toronto, and London compete with ever-increasing tax incentives to lure studios.49 Hye Jean Chung notes how the "nonlinear" nature of VFX pipelines facilitates this internationalization trend.50 As Michael Curtin and John Vanderhoef write, many simple VFX tasks like wire removal can be done by "a couple guys in a garage in Van Nuys or a small shop in Chennai."51 Although studios often cast agile workflows as a feature of their commitment to producing a refined final product, or as technological advance for its own sake, these approaches to production organization are inseparable from race-to-the-bottom political-economic and employment practices.

#### Scripts, Plug-ins, and Programs for Nonlinear Animation

If workflows and pipelines demonstrate a pervasive trend toward using "development" principles that collapse production and R&D, the particularity of nonlinear animation production is an intensified case of these same trends. Nonlinear animation is constructed by studios as a special type of production that entails the deep integration of technical work. If you look at a flow chart of VFX or animation workflows that includes things like modeling, rigging, lighting, and rendering, nonlinear animation has its own special branch, often referred to as "FX" or as "technical animation."52 These FX departments do not make animations; they make simulations that make animations. Getting a certain phenomenon to look a certain way, the gathering of a character's clothing, for example, or the splash of a turbulent sea, can require buying new software, writing new scripts, developing new plug-ins, or even writing new simulation software from scratch. All this work is done to build a technical apparatus for automated animation.

Nonlinear animations consist of technological and organizational configurations designed to manage unpredictability, just like workflows and pipelines. As Chap. 2 established, the genealogy of nonlinear animation is rooted in attempts to predict and manage unpredictable systems like the weather or financial markets. A closer look at this form of animation reveals how FX artists and TDs build technical apparatuses to enable flexible and reprogrammable control, and how the work of animating and engineering has been collapsed into a single undifferentiated "dev" task. Indeed, the jobs of "senior FX artist" and FX TD are practically interchangeable.53 This suggests that while there are still official divisions between technical and artistic work, in practice they are one and the same.

Like other VFX and animation tasks, the first step in nonlinear animation preproduction is planning exactly what software, people, and infrastructure a studio will need to achieve the desired look for a shot. In the late 1990s and early 2000s high-quality nonlinear animations were relatively expensive to produce and required more basic technological development. For situations where there was only one brief shot, it may have been easier to simply fake it using composited libraries of footage.54 As software improved over the course of the 2000s and 2010s, a spectrum emerged: lower-budget projects could be handled with a combination of off-the-shelf software and minimal customization, while more spectacular or photorealistic high-budget projects involved high levels of customization and building new software.

Throughout most of the 2010s every nonlinear effect in a feature film or TV show would have been made up of several different effects combined. For example, animating a stormy ocean required animating the larger-scale flow of waves, the smaller-scale turbulence and splashes, the foam breaking off the waves, wind effects, and so forth. All of these are specific simulations in their own right. FX artists refer to this combination of effects as the *master FX recipe*.55 SideFX's Houdini was, and is, the most popular core software for building an FX recipe because of its nodal pipeline design.56 Starting in 2009, SideFX also started making its own collection of nonlinear animation effects, with fluid, particle, rigid-body dynamics, fur, cloth, fire, and smoke solvers that work natively in Houdini. A low-budget FX job in the 2010s might only call for a one-stop-shop suite like this, and indeed these types of solutions have since become conventionally quite acceptable to use on most projects. Using off-the-shelf suites like the one sold by SideFX dramatically cuts costs. Buying new software is expensive, not just because it needs to be built into the pipeline, but also because workers need to be trained on it.57
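The layered structure of a master FX recipe can be sketched abstractly: a primary simulation establishes the bulk motion, and secondary passes are seeded from its output rather than from the raw shot. The function names and string outputs below are purely hypothetical stand-ins for separate wave, spray, and foam solvers.

```python
# A toy "master FX recipe": secondary effects derive from the primary sim,
# and each pass is kept as a separate layer for compositing.

def sim_waves(shot):
    return f"{shot}:waves"          # primary, large-scale motion

def sim_spray(wave_layer):
    return f"{wave_layer}+spray"    # secondary, seeded from the waves

def sim_foam(wave_layer):
    return f"{wave_layer}+foam"     # secondary, seeded from the waves

def master_recipe(shot):
    waves = sim_waves(shot)
    # Secondary passes sample the primary sim, not the raw shot.
    return [waves, sim_spray(waves), sim_foam(waves)]

print(master_recipe("storm_012"))
# ['storm_012:waves', 'storm_012:waves+spray', 'storm_012:waves+foam']
```

Keeping the passes separate is what allows a director's note ("more foam") to be addressed by re-running one layer instead of the whole recipe.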

The next step up in complexity and cost for building an FX recipe entails sourcing different third-party plug-ins to achieve a more customized or photorealistic look. Plug-ins range from relatively simple tweaks to sophisticated nonlinear physics simulations. They might add a certain kind of spray to ocean waves, for example. Anyone who has used an internet browser or word processor should have some basic understanding of what a plug-in is, but it is worth taking a moment to consider the definition. A plug-in is a kind of modification that adds functionality to a piece of software. The difference between software and plug-ins is that a software program can run on its own, without being built into something else. Without a framework to accommodate plug-ins, the modification of software would be difficult, and in some cases illegal. Software like Houdini is designed to be as flexible as possible because there are so many different possible modifications for different jobs.58 The more readily these programs can accommodate plug-ins, the less labor needs to go into building the pipeline and therefore the less money needs to be spent. Many software companies make their products plug-in friendly because plug-ins allow third parties to expand the functionality of their software and thus drive more consumption of the core product. This is, again, an example of how agile workflows are facilitated by flexible and reconfigurable technical infrastructures.

Programs like Houdini can also connect to other independent simulation programs, like Next Limit Technology's program RealFlow. When different programs do not work well together, the FX TDs must build their own custom pipeline infrastructure. TDs refer to this as writing "glue code." The more customized the job is, the more elaborate and customized the pipeline infrastructure will be. Thus, the logic of plug-ins is intimately linked to that of pipelines.
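What "glue code" looks like in practice is usually mundane format translation. The sketch below is a hypothetical example, not the actual interchange format of RealFlow or Houdini: it imagines one tool that exports particles as JSON records and a second that expects a flat binary stream of float triples.

```python
import json
import struct

def json_to_binary(json_text):
    """Glue code: convert a JSON particle export (a list of {x, y, z}
    records) into a packed little-endian binary stream of float triples,
    the kind of flat format a downstream tool might ingest."""
    particles = json.loads(json_text)
    out = bytearray()
    for p in particles:
        out += struct.pack("<3f", p["x"], p["y"], p["z"])
    return bytes(out)

src = '[{"x": 0.0, "y": 1.0, "z": 2.0}, {"x": 3.0, "y": 4.0, "z": 5.0}]'
blob = json_to_binary(src)
print(len(blob))  # 24 bytes: 2 particles x 3 floats x 4 bytes each
```

Unglamorous as it is, this kind of translation layer is exactly the custom pipeline infrastructure TDs must build whenever two programs were not designed to be connected.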

The next step in technical complexity beyond employing plug-ins is writing scripts. This is the sort of thing done by the more experienced FX artists and FX TDs. Much like plug-ins, scripts can only run within a program. By contrast, a program runs on its own. In other words, programming is writing instructions for the computer, while scripting is writing instructions for a specific program. Being able to write scripts requires understanding the language a program uses. For example, Autodesk Maya's script editor console uses its MEL scripting language, but in 2007 Autodesk added support for the Python language, which is vastly better known. Artists and TDs might use scripts to automate something to improve work efficiency, like combining several repetitive jobs into a batch to eliminate the need to do them one by one. This sort of efficiency work is all done in the name of minimizing the number of clicks an artist must make to do their work. Thus, work is done faster, or with fewer people, and profits are maximized.59 But scripts can also be used to manipulate the automation of nonlinear animations, accessing a level of customization not available through the graphical user interface.
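The batch-automation pattern described above can be illustrated in plain Python, outside any host application. The shot names and the export step here are hypothetical; inside Maya the per-shot call would be made through the host's own scripting interface rather than a stand-in function.

```python
def export_cache(shot, frame_range):
    """Stand-in for a per-shot export call that would normally be issued
    inside a host application's script console."""
    return f"{shot}_f{frame_range[0]}-{frame_range[1]}.cache"

def batch_export(shots, frame_range=(1, 120)):
    """Run the same export step over every shot, replacing many rounds of
    clicking with one command."""
    return [export_cache(s, frame_range) for s in shots]

print(batch_export(["seq01_sh010", "seq01_sh020"]))
# ['seq01_sh010_f1-120.cache', 'seq01_sh020_f1-120.cache']
```

A ten-line loop like this is the quotidian face of the efficiency work the chapter describes: the same job, done once per shot, with no clicks in between.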

The distinction between programming and scripting is important to understand, because while script writing is a common practice, programming work is generally only done at the largest studios. As one TD and former FX artist told me, "Modification of scripts or creation of plug-in is pretty usual. Software change requires foresight about what your need will be in the future."60 Script writing also demonstrates that the line between developing tools and using tools is blurry. At a certain point, the quotidian work of script writing becomes so complex that it becomes an entire plug-in.61 And in essence every customization of software is technology development. This blurriness is reflected in labor roles, as in the case of the interchangeability of TD and senior FX artist titles. One might expect there to be a strict division between technical and artistic roles, but this is clearly not the case. Making an image and developing a technology are indistinguishable.

These blurry lines notwithstanding, the scale of tool development clearly tracks with the size of VFX and animation studios and their projects. Developing software from the ground up requires immense foresight, planning, and resources.62 The largest studios do the most fundamental technology development. As one FX TD explained, they do this because of the "immediacy and customizability" provided by in-house software.63 Having the people who made the software down the hall makes service immediate and makes getting the exact image the director wants easier. Furthermore, as Chap. 3 noted, there are immense strategic and economic advantages to developing and owning proprietary technologies.

Sometimes software companies themselves offer custom services. There are also some VFX studios that specialize in just one type of effect, and even one piece of simulation software. These studios defy categorization as either software developers or production studios. The best example of this is Fusion CI Studios, a Vancouver-based company founded in 2004 that specializes in RealFlow software. Fusion CI does extensive R&D work and has developed its own specific fluid simulation tool that operates within RealFlow, called Smorganic, which specializes in animating the ultra-thin sheets fluids make when they splash. Fusion CI models itself as a "plug and play" company, which can be brought on for a specific job, bringing its own artists and technicians and attaching itself to the greater VFX pipeline and workflow. This approach makes economic sense for studios that have important fluid simulation scenes to do but do not have the operational scale to justify, or indeed fund, extensive R&D. Fusion CI's hybrid role once again demonstrates how indistinguishable technology development and animation are in nonlinear animation, and how modular and reprogrammable production workflows can be.

As a way of demonstrating how even the most basic nonlinear animation blurs the line between animation production and technical work, I will describe a hypothetical case based on my own practice learning how to make fluid simulations and experimenting with different pieces of software. A case like this one demonstrates the conflation of image making and technological development just as well as a large-scale project, not because of the particular combination of custom software or coding it involves, but because of the way the user controls animated phenomena through the manipulation of parameters. The user (an FX artist or a rank amateur like myself) builds a flexible technological apparatus to manage unpredictability the same way a complex pipeline does. For this example, I will use the 3D animation software Maya; the Maya plug-in Krakatoa; the compositing program Nuke; and RealFlow, a fluid simulation program that outputs to Maya using a plug-in. This example may not represent the most cutting-edge work done at large VFX or animation studios, but it does offer a basic and general account of what using this type of software is like.

Our hypothetical nonlinear animation job starts in RealFlow, where the artist makes a particle-based simulation of a fluid. The first steps will likely involve putting in any boundaries, containers, or objects that the fluid might splash off of. Next, the artist inserts the fluid, either as something already present or as something flowing out of what is called an *emitter*, like a pipe or an overflowing bathtub. The FX artist can alter the size, direction, and amount of flow from an emitter by changing different values, either in a script or, more likely, in a tool-specific user-interface window. At this point the artist can insert different forces into the fluid over a timeline, which will cause perturbations, vortices, and movement. They can also potentially add random noise, using a stochastic algorithm to make the movement more interesting and naturalistic. They might also adjust the force of gravity. At this stage, the artist can also change the properties of the fluid, such as the vorticity (how many swirls the fluid forms) or the viscosity (how thick the fluid is). All of this is done by changing the value of a given modifier. It is important to emphasize here that these are all pre-programmed conditions. The artist cannot directly shape the fluid, but instead manipulates parameters. With any adjustment they will have to run a low-level-of-detail simulation to see what the outcomes of these conditions will be.
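The parameter-driven character of this work can be caricatured in a few lines of code. The sketch below is not RealFlow's actual API — the class and parameter names are invented for illustration — but it captures the working principle: the artist never places a particle by hand, they only set conditions (emission rate, gravity, noise, viscosity) and run the simulation to see what emerges.

```python
import random

class ToyFluidSim:
    """Minimal particle 'fluid': motion emerges from parameters, not direct control."""
    def __init__(self, emit_rate=10, gravity=-9.8, noise=0.1, viscosity=0.5, seed=42):
        self.emit_rate = emit_rate      # particles emitted per frame
        self.gravity = gravity          # downward force applied each step
        self.noise = noise              # amplitude of stochastic perturbation
        self.viscosity = viscosity      # damping on horizontal velocity
        self.rng = random.Random(seed)  # seeded, so low-detail previews repeat exactly
        self.particles = []             # each particle is [x, y, vx, vy]

    def step(self, dt=0.04):
        # Emit new particles at the emitter (the origin), with slight random spread.
        for _ in range(self.emit_rate):
            self.particles.append([0.0, 0.0, self.rng.uniform(-1, 1), 2.0])
        # Advance every particle under gravity, viscosity damping, and noise.
        for p in self.particles:
            p[3] += self.gravity * dt                          # gravity alters velocity
            p[2] *= (1.0 - self.viscosity * dt)                # viscosity damps motion
            p[2] += self.rng.uniform(-self.noise, self.noise)  # stochastic turbulence
            p[0] += p[2] * dt
            p[1] += p[3] * dt

sim = ToyFluidSim(emit_rate=5, viscosity=0.8)
for _ in range(25):      # a "low-level-of-detail" preview run
    sim.step()
print(len(sim.particles))  # 125: 5 particles per frame over 25 frames
```

To change the look, the artist does not edit the resulting positions; they change `viscosity` or `noise` and run the preview again — which is the trial-and-error loop described below.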

At this point the artist has made a flowing volume of particles. The next job is to draw a mesh onto the particles. Particles are like a volume without a surface, and adding a mesh gives the water a surface. With the polygonal surface of the fluid drawn, the FX artist can tweak more values, such as the thickness of splashes. It is also common practice for the animator to make a second particle simulation that will stay as particles without a mesh. These little points of water will act as mist droplets.
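The meshing step can likewise be sketched in miniature. Production meshers (marching-cubes-style algorithms) fit a polygonal surface to a density field sampled around the particles; the toy below only computes such a field on a coarse 2D grid and thresholds it, with made-up positions and constants, but it shows the underlying idea of turning a cloud of points into a bounded volume.

```python
# Toy surfacing: rasterize particles into a density field, then threshold it.
# A real mesher would extract a polygon surface from exactly this kind of field.
particles = [(1.2, 1.1), (1.8, 1.4), (2.3, 1.0)]  # illustrative 2D positions
GRID, CELL, RADIUS = 6, 0.5, 0.6

def density(cx, cy):
    # Sum a simple falloff kernel centred on each particle.
    d = 0.0
    for px, py in particles:
        dist2 = (cx - px) ** 2 + (cy - py) ** 2
        d += max(0.0, 1.0 - dist2 / RADIUS ** 2)
    return d

surface_cells = [
    (i, j)
    for i in range(GRID)
    for j in range(GRID)
    if density(i * CELL, j * CELL) > 0.3   # threshold marks "inside the fluid"
]
print(len(surface_cells))
```

Raising the threshold shrinks the surface (thinner splashes); lowering it fattens the fluid — one more example of a look controlled through a single parameter.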

Next, the artist outputs the simulations to Maya. Here, lighting and camera position can be set, as they would be with any animation, although another department may do this work. The artist will also give shading and reflective properties to the mesh surfaces, as well as surface textures and coloring. In the case of water, the surface will obviously be transparent. The artist can also change the look of the secondary particle simulation using the Krakatoa plug-in, giving the particle points shade or color. Finally, the two simulations will be put together and composited into a scene with other elements using Nuke. This is a relatively simple example of how an FX artist would go about making a simulation. One person with a few thousand dollars' worth of technology could do it. In 2020 the open-source animation suite Blender fully integrated an FX framework called Mantaflow that combines all of this functionality, so a simpler version could even be done with only one piece of free software and a consumer-grade computer.

Although the work described here sounds, and indeed looks, not unlike the work of any digital animation artist, there are some important distinctions to be made. For one, the artist cannot directly control the outcome of their simulation. The best they can do is use trial and error and make choices based on their own experience with the behavior of a given simulation. Further, the artist is using nonlinearity, and even adding additional randomness, as an important part of achieving the right look. The FX artist seeks to foster unpredictable complexity as a resource while also shaping it to conform with direction. The FX artist is thus building a technical apparatus (the simulation) to control some unpredictable system. The principles are the same as those shaping workflows and pipelines in general.
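That trial-and-error loop has the shape of a parameter search: run the simulation at several candidate values, judge each result against the directed look, keep the best. In the sketch below, the `splash_height` function is a made-up stand-in for running a low-detail simulation, and the numeric score stands in for the artist's eye; neither is any tool's real interface.

```python
def splash_height(viscosity):
    # Stand-in for a low-detail simulation run: the artist cannot set the
    # splash height directly; it emerges from the parameter they can set.
    return 3.0 / (1.0 + viscosity)

target = 1.5   # the look the director's reference suggests
candidates = [0.2, 0.5, 1.0, 1.5, 2.0]
best = min(candidates, key=lambda v: abs(splash_height(v) - target))
print(best)  # → 1.0, the viscosity whose emergent splash best matches the target
```

The artist's skill lies in knowing which parameter to sweep and how to read the result — control exercised over conditions, never over the outcome itself.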

The nature of this work raises a curious theoretical question that brings us back to a topic addressed in the second chapter. Every time an FX team uses a different plug-in or modifies a simulation, they are re-inventing their representational apparatus. They are adopting a different way of seeing the ontology of the fluid (or hair or smoke, etc.) by engineering a different "solution." Imagine if filmmakers re-invented the camera every time they made a film. As Chap. 2 argued, there is an exciting potential in representing the world through such contingent, speculative means. Thus, although the turn toward conceptualizing production as "development" generally takes the shape of treating production work as technical problem solving, this does not tell the whole story. In order to understand the work of this new breed of creative industries worker we should borrow a page from the philosophy of engineering and recognize that the epistemic value of media is not always in the "knowing that" but also in the "knowing how."

#### Recasting Technical Labor

Historically it has been common for creative production work and technical work in film industries to be constructed as separate categories. The title of an organization like the Academy of Motion Picture Arts and Sciences suggests that the two sit close together, but in practice the academy tends to compartmentalize technical work. The annual Scientific and Technical Awards are a mere sideshow when compared to the main Academy Awards. The VFX and animation industries have been slowly renegotiating this division, though. When organizations like the Visual Effects Society describe the work done by their members, they liken them to Renaissance artists, invoking a period when artists employed sophisticated techniques informed by their knowledge of light and physiology to produce "realistic" images.64 Animation studios have a similar rhetoric, styling the extensive engineering work they do as a form of creativity. A traveling exhibit put together by DreamWorks and the Australian Centre for the Moving Image features numerous displays of how their artists solve "creative and technical challenges," including one (sponsored by the computer hardware company HP) that allows patrons to manipulate a fluid simulation through an interactive display.65 Pixar also likes to emphasize the way Walt Disney fused the "magic" of animation with technical innovation, and they construct themselves as continuing this tradition of technological creativity.66

Digital animation studios such as these have always put extra emphasis on the creativity of their sometimes very technical work, because from the outset they have had to make the case that seemingly rigid and lifeless computer graphics can be used to make cartoons. Pixar's use of their shorts is an excellent example of this. *Luxo Jr.* (1986), their first short after they split from Lucasfilm, features rigid looking desk lamps brought to life through careful manual manipulation of their gestures, in the Disney animation tradition. Christopher Holliday argues that the Luxo character is a "synonym" for the "animatedness" of digital animation, and thus the desk lamps have become an enduring part of the studio's brand, featuring prominently in their logo and the entrance to their studio.67 One might therefore expect that nonlinear forms of animation, which are made through engineering an automated simulation rather than manual keyframe techniques, would be something Pixar might downplay. But rather than avoiding animation work that seems too technical, they promote the creative quality of this technical work. Their 2016 short *Piper* shows the way they treat this highly technical form of animation as yet another form of animated creativity.

*Piper* tells the story of a sandpiper learning to hunt for food in the ever-changing landscape of an ocean shoreline. Following the paradigm of Pixar shorts, the animators communicate an incredible amount of storytelling through the subtle character animation of gestures and facial expressions. The birds, though relatively naturalistic, convey a range of emotions that are universally intelligible to humans. These expressions are the result of work that requires painstaking manual labor done by people who fit our traditional definition of what a keyframe animator is. Yet *Piper* also abounds with nonlinear animation. The feathers, a key expressive part of the birds, automatically ruffle in the wind and react to movement. The feathers are also bound to the deformable movement of the skin of the birds, which is connected in turn to a simulation of musculature.68 Thus, while the bird's core model is manipulated manually, the overall animation of the bird consists of at least as much simulation as manual animation.

The most impactful aspect of *Piper* is arguably the way it renders the material experience of being small. The tiny waves seem huge, the grains of sand are more like pebbles, and blades of grass are the size of trees. The animated material quality of all these things is the result of an imaginative use of simulation, from the flow of the grass, to the crash of the waves, to the way the sand moves as the bird tumbles across tiny dunes. These simulations are not self-evident, and they are not easily achieved. They require imagination, picturing oneself in the world on a different scale. This vision would have informed the building of the FX recipe and pipeline infrastructure. TDs would have customized and altered certain tools, and FX artists would have carefully manipulated different parameters, written scripts, and created many different iterations, all in the name of arriving at this final product. None of this is evident when you watch the short, but what is evident is the artful way the artists have shaped the material world. *Piper* is legibly a techno-artistic feat.

*Piper* demonstrates very keenly how the manual and the automatic have been renegotiated to style technological work as creative work. Given the discursive importance Pixar shorts have, we can get a glimpse of how the studio is renegotiating these ideas in this short. The technical work of simulation building is being subsumed into the image Pixar has worked so hard to cultivate all these years as a fount of creativity in the tradition of Disney animation. Much in the way Pixar originally used their shorts to convey that 3D computer graphics are genuine animation, they now convey that simulation is animation. The job of making a crashing wave look just right has been elevated to the same level as the job of animating the expressive gestures of an animated character.

This new image of creative work in animation raises some questions about the construction of labor roles. An important subject in the field of production studies concerns the subjectivity of the self-identified creative worker, and the role the discourse of creativity plays in organizing labor. Vicki Mayer notes how the attributes of creativity and professionalism are used to create hierarchies in media industries, above-the-line and below. Above-the-line are the professionals who manage workers and the creatives who have control over the content being produced: labor constructed as intellectual or creative. Below-the-line are the technical and service workers.69 Similarly, John Caldwell is interested in understanding the socio-cultural factors that make possible the current state of the industry, where workers log long hours for little, or sometimes no, pay. He finds that there is an "invisible economy" of "symbolic payroll," where workers are motivated by discourses like creativity instead of material compensation or job security.70 The idea of creative work makes possible the state of precarious "deprivation" employment practices in industries like VFX and animation.

If the idea of creativity is so important for organizing labor, and if, as Mayer argues, it follows a division between technical trades and creative or management roles, what happens when both creativity and technical work are cast as development processes? Is the permeability between FX artists and TDs evidence that these labor divisions have been disrupted? A key finding of Mayer's is that below-the-line workers see themselves as making creative contributions to the production of media, but from the outside they are invisible and excluded. Mayer writes, "all of us increasingly define ourselves through our productive work while at the same time industries devalue our agency as producers."71 It is exactly this dynamic that makes it possible to benefit from the motivating discourses Mayer and Caldwell describe, while at the same time having labor spread across many contracted companies scattered throughout the world. Caldwell categorizes production and post-production work below-the-line, yet he also demonstrates how the workers in these categories are strongly motivated by the discourse of creativity and the symbolic payroll. He observes that low-level VFX workers work so hard in large part because they want to imagine themselves as artists who are a part of the movies they love.72 Even if you only did some match-moving work on Jar Jar Binks in a scene that ended up being cut, you still worked on a *Star Wars* movie.

Nonlinear animation and R&D laborers are in no danger of being recognized publicly by the industry as valued creative workers, even as technical work becomes indistinct from creative work. VFX studios, supervisors, and organizations repeat the same refrain with surprising consistency in public communications: our job is to make the director's vision come to life.73 *The VES Handbook of Visual Effects* writes that VFX supervisors (the highest ranking VFX workers) take "artistic desire and turn it into a technical plan."74 The role of VFX supervisor is truly commensurate with that of director of photography or art director, yet VFX supervisors continue to lack recognition in the most visible places, like the Academy Awards. The only pushback against this has come from labor organization initiatives.

The integration of more engineering and R&D work into animation and VFX production has in fact ensnared more workers into the symbolic payroll. Academic nonlinear animation researchers revel in their association with the film industry. At the very least, association with Hollywood seems to be a good way of promoting your work. Evidence of this can be found in profiles on researchers' personal websites and blogs, on official university webpages, and, of course, in SIGGRAPH presentations. Take, for example, a scholarly publication by Jerry Tessendorf (a researcher profiled in Chap. 3) and several other scholars at Rhythm and Hues, which was presented at SIGGRAPH and can be accessed both through Tessendorf's personal website and through his university page.75 The paper concerns a new technique for animating realistic clouds. This research was conducted at Rhythm and Hues for a specific project: the film reboot of the 1980s television show *The A-Team* (2010). The title of this peer-reviewed research paper is *I Love It When a Cloud Comes Together,* a play on the famous catchphrase from the show, "I love it when a plan comes together." The researchers seem to be playfully suggesting an analogy between their work and the work of the A-Team: a scrappy squad of underappreciated misfits who always get the job done. It seems quite clear that researchers enjoy being a part of making spectacular movies. It no doubt differentiates them from their peers in other fields. How many mathematicians have Academy Awards? While these valued scientists at the forefront of their field probably are not exactly exploited by this symbolic payroll, this is a phenomenon that suffuses networks of graduate students and more precarious academic laborers. To reiterate Mayer's words, "all of us increasingly define ourselves through our productive work while at the same time industries devalue our agency as producers."76

There are several convergent and related causes for what could broadly be described as the development turn in VFX and animation production. First is the spread of the logic of R&D from the institutions of the military-industrial-academic complex to the film industry. The fact that VFX and animation studios have invested so much into R&D, and that R&D has become an important strategic and economic factor, has had a long-term effect on the role of technology in production. The military's R&D complex is also where the concept of project management was formalized and spread in the first place. Second, the spread of nonlinear animation tools, which complicate the relationship between automation and animation, has had a practical effect on the nature of production work. Tools such as these have made writing scripts, installing plug-ins, connecting software pipelines, and even sometimes writing new programs, an everyday part of animation work.

On their own, these two important conditions explain much of these trends in production, but they do not necessarily explain why trends like flexibility, configurability, and customization have become so important. Why did agile project management become so much more popular than waterfall? Why have VFX and animation studios borrowed these principles? They were responding to economic trends that seek capital productivity and efficiency in neoliberal and post-Fordist principles, which introduce competitive market forces to every facet of operation, making every film production a nesting-doll of contracted and sub-contracted vendors that in turn employ workers on six-month contracts. A turn toward conflating cultural work with computer engineering could also be seen as a by-product of the "information society" discourse that proliferated in this neoliberal context, because it sees culture as nothing more than information.77 All of these conditions are intimately linked. The rise of R&D in the film industry was spurred on by the shift from a Cold War federal funding model to a tax-incentivized private model. Thus policy, economics, discourse, and technology all feed into each other, with no single factor offering a sufficient explanation on its own.

The concept of R&D took experimentation, exploration, and discovery and modeled it as a process that could be managed and instrumentalized without compromising its productive unpredictability. Nonlinear simulation sought to model unpredictable processes so that they could be analyzed and reproduced. Nonlinear animation uses these principles to animate the unexpected, random, and complex nature of natural motion, while also being able to artistically manipulate it. There is an epistemic paradigm specific to this period in history, an episteme, which joins these ideas. Chapter 5 will pursue this concept further, adding greater nuance to certain assumptions about post-Fordist management techniques, using the example of Pixar.

#### Notes


David Bordwell, Janet Staiger, and Kristin Thompson, *The Classical Hollywood Cinema: Film Style & Mode of Production to 1960* (Routledge, 1985), 92–95.



### Animating Management

How does one sculpt water? It is an absurd idea, like trying to nail Jell-O to the wall. Yet a considerable amount of contemporary computer animation entails making and sculpting such unruly and unpredictable things: snowstorms, clouds, fire, hair, cloth, water, and, in fact, Jell-O. Developing tools for this type of animation has been the subject of considerable effort by large studios and software companies for decades. One can essentially take each major animated feature from Hollywood studios like Walt Disney Animation and Pixar Animation from the past few decades and single out a key example of this type of animation being put on display in each film. *Monsters, Inc.* (2001) and *Brave* (2012) prominently featured hair and cloth animation by iterations of Pixar's "FizT" software, *Frozen* (2013) featured Disney's "Matterhorn" snow simulation software, and *Moana* (2016) featured water animation powered by Disney's "Splash" fluid solver. These types of animation all require the creation of an unpredictable, chaotic type of motion that can be shaped by manipulating parameters without losing its uncanny naturalistic quality. The same techno-scientific concept of creating something unpredictable and shaping it, without losing its ineffable quality, is also at work in the way large contemporary animation studios represent their approach to management to the public. Public relations and promotional representations of studio workspaces and work cultures are full of examples of how they create conditions for unpredictable things to happen and occasion the unexpected as a part of the creative process.

Both these types of animation tools and these management concepts hinge on the idea of modeling and simulating nonlinear systems, using random number generators or dynamic interactions to simulate unpredictable behavior in systems. Since the military-industrial-academic complex of the Second World War, computer simulation has been used by a variety of disciplines to understand various unpredictable and dynamic phenomena like weather patterns or financial markets. A key application of this concept has been management science, which uses simulation to design resilient systems for an unpredictable world. While management science is part of a long tradition of industrial management theory, it represents a different era in the epistemology of management as compared with the scientific management practiced by the likes of Frederick Winslow Taylor, which was influential to animation management in the early twentieth century, as work by scholars such as Donald Crafton has shown.1
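The twin properties this chapter leans on — behavior that is unpredictable in detail yet reproducible and steerable through its parameters — can be seen in the smallest possible simulation. The sketch below is a generic illustration, not any studio's tool: a seeded random walk whose shape no one could predict by hand, but which any run with the same seed reproduces exactly, and whose overall tendency is steered by a single `drift` parameter.

```python
import random

def random_walk(steps, drift, seed):
    # Unpredictable in detail, yet reproducible (seed) and steerable (drift):
    # the pairing that nonlinear simulation offers animators and managers alike.
    rng = random.Random(seed)
    x = 0.0
    path = []
    for _ in range(steps):
        x += drift + rng.uniform(-1.0, 1.0)
        path.append(x)
    return path

a = random_walk(100, drift=0.1, seed=7)
b = random_walk(100, drift=0.1, seed=7)
print(a == b)  # True: the same seed replays the "unpredictable" path exactly
```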

The linkage between nonlinear animation and management can be observed across contemporary animation, visual effects, and interactive game production, but this chapter is limited to the case of Pixar. Pixar's founders were all computer graphics research pioneers who worked on some of the earliest applications of nonlinear simulation for computer animation. Pixar invests considerably more effort into promoting its management theory than its contemporaries. The studio trumpets its approach to management through popular books, business journal articles, DVD extras, and behind-the-scenes public relations campaigns. These materials shape Pixar's particular corporate ethos for its audiences, its employees, its software customers, and potential investors. As a Bay Area tech company that began to model itself as an animation studio, Pixar carefully sculpts its corporate identity as innovative yet in tune with animation history, and the concept of nonlinearity plays an important role in negotiating this hybrid identity.

As work by Nicholas Sammond finds, early industrial animation studios such as Bray Productions, Fleischer Studios, and Walt Disney Productions sought to emphasize the unpredictable liveliness of creativity at their studios while also demonstrating their ability to control and manage that unpredictability through Taylorist and Fordist industrial management techniques that promoted regulation and efficiency.2 Pixar similarly seeks to promote their creativity and industrial efficacy, at once maintaining a connection to studios like Disney while also dispensing with restrictive, routinized aspects of industrial management and promoting the neoliberal Silicon Valley entrepreneurial myths of their origins as well as their post-Fordist management techniques. Their nonlinear approach to animation negotiates this complexity and promotes a harmonious, uncomplicated, and fun vision of creative labor. Pixar represents creativity as the result of unpredictable interactions and random processes, and they promote the way this randomness and complexity can be cultivated and directed through the manipulation of conditions and parameters. Unpredictability can thus be integrated into the production process by managing the conditions that generate it. Pixar renders management as a form of animation, as a way of enlivening and occasioning the unexpected, and they render animation as a nonlinear simulation task.

In Siegfried Zielinski's words, animation is an "interdiscursive phenomenon," with a diverse genealogy of different meanings grounded in the concept of introducing a life force to a material body.3 Through history this vital animating force has been many things, including a soul, a spirit, or electricity. In the case of Pixar this animating force is nonlinear contingency. This is what makes nonlinear animations like the splashing water in *Finding Nemo* (2003) look so uncannily lifelike, and it is also what the studio sees as a key animating component in how they produce their innovative films and technology.

Pixar's algorithmic way of thinking promises to resolve the tension found in early industrial studios between creativity and control. By treating management as a computational tool, by approaching management like they approach the simulation of splashing water, Pixar further promotes a vision of neutral technocratic management that elides issues of labor justice and equity and promotes an image of the studio as a utopic harmonious space that fosters creativity. This is a vision of control that grew from a nexus between animation, technology, and management theory, and it utilizes a logic similar to that observed by scholars such as Tarleton Gillespie, Cathy O'Neil, and Safiya Noble, whereby algorithms and software platforms are constructed as objective and neutral in such a way as to obfuscate politics.4 Understanding this helps to explain how Pixar has dealt with labor issues, including recent cases of workplace harassment, but this research also provides insight into broader trends in creative management that Pixar has inspired.

Pixar is often made to represent the introduction of Silicon Valley neoliberalism (and related post-Fordist labor practices) to the American film industry. Former Pixar CFO Lawrence Levy styles the studio as "bringing Silicon Valley bravado to Hollywood."5 Paul Flaig describes the studio as a "vanguard" of post-Fordism and as "representing a manifold turn from Hollywood's film factory to Silicon Valley."6 The post-Fordist approach to managing creative labor in this context is defined by the erasure of the line between private life and work life and by the autonomy (and precarity) of the self-managed contract worker or entrepreneur.7 While Pixar's management theory conforms closely to the former, it deviates in some interesting ways from the latter. As the following pages will explain, seen through the frame of nonlinearity, creativity is not a fugitive, ephemeral resource to be captured. Rather it is something that can be generated and directed through the manipulation of conditions and parameters. Thus creativity is something that is fostered in-house rather than being contracted out.

#### Scientific Management and Early Industrial Animation

When industrial management techniques were first introduced to the craft of animation in the 1910s they brought about certain points of tension; the rigid and routinized style of industrial management had to be reconciled with a discourse of chaotic, unpredictable creativity. On the one hand, animation studios promoted their ability to bring things to life, creating unpredictable situations and characters with a mind of their own; on the other hand, they accomplished this feat through a highly routinized, deterministic, and linear process. Scott Bukatman observes this dynamic in both animation and early American comics, and he attributes it to the historical context of turn-of-the-century America, where an ideology of opportunity and self-determination existed in tension with the highly regulated realities of industrial labor.8 This dynamic between creative chaos and management is still relevant in contemporary animation industries. What has changed is the conceptual framework used to understand it.

The industrialization of early animation coincided with the ascendance of scientific management, which approached the management of labor as an engineering task.9 With a focus on output and efficiency, scientific management employed empirical and technical analytical tools to understand labor tasks and improve worker efficacy and efficiency. Scientific management is associated with a few key figures from the early twentieth century, such as Frank Bunker Gilbreth and Henry Gantt, but the movement has become synonymous with one key figure, Frederick Winslow Taylor, the author of the seminal 1911 book *The Principles of Scientific Management*. The idea of applying "science" to the management of labor is a somewhat vague concept, more indicative of a desire for epistemic authority and a declaration of alignment with a positivist zeitgeist than of any specific set of methods or theories. Yet the title of "science" is more appropriate when one considers Taylor's key contribution to the history of management. While he invented a number of concepts and techniques, his fundamental contribution was the idea of examining the practice of management itself: to theorize the role of management and make it more than the mere oversight of workers.10 This was the study, or science, of management. While new approaches to management that focused on hierarchies, bureaucracy, and psychology would replace scientific management in a relatively short period of time, this reflexive disposition toward management would be Taylor's enduring legacy. Theorization of management would have an important influence on animation production over time, both in terms of how animation production processes were designed and in terms of how the process of animation was understood and represented.

Although the first forms of cinematic animation were largely the product of craftwork by individual artists like Winsor McCay or Émile Cohl, by 1913 cinematic animation started to take on elements of industrial production. During the following years studios like Bray Productions and Fleischer Studio began implementing techniques and technologies in the name of improved consistency, efficiency, and output. Examples include the use of celluloid, registration pegs, the rotoscope, standard production references, and the emergence of specialized below-the-line labor roles like inkers and inbetweeners.11 Donald Crafton's research offers substantial evidence that John Randolph Bray in particular was engaging Taylor's ideas and applying them to animation.12 Animation studios were thus studying the animation process and reflecting on how it might be done differently with the implementation of different configurations of labor and technology.

Evidence of this management theorization can be seen both in the films produced by these studios and in promotional communications about the studios. In the case of promotional materials, Sammond's research on fan magazines and trade publications finds that studios like Fleischer's were eager to show off their "rationalized" and efficient industrial animation processes. This should not be taken to mean that they wanted to look mechanical and boring, though. Sammond notes that while the earliest representations of the task of animation by Winsor McCay represented animation as an onerous grind, as studios became more industrialized they tended to represent the process of animation as being more "playful and capricious" while also being more efficient, "rational and productive."13 The studios were interested in showing that their workers were lively and fun, but also that the studio producers and managers were able to "corral the rambunctious energy of its animators."14 The promotional material portrayed this relationship not as one of animosity but of paternalistic guidance. Sammond further finds that the complexity of this relationship between unpredictable liveliness and managing direction is more obliquely evident in animations from this period, like the Fleischer Studio's *Out of the Inkwell* series. Here the rebellious and mischievous animated character Koko the Clown is contained and managed by the live-action animator who drew him.15 Sammond finds that "animators created a commodity that appeared to speak back to its creators and assert its independence from the social and material order of its making … only inevitably to be put in its place."16

This transitional period of early industrialized animation eventually gave way to more settled and conventionalized portrayals of animation production, typified by Walt Disney Productions. Here the work of creating an animated film was no longer portrayed as the manual work of drawing, but as the industrial management task of directing an extensive studio with numerous departments of specialized workers.17 A promotional film that followed the release of *Snow White and the Seven Dwarfs* (1937) titled *How Walt Disney Cartoons Are Made* (1939) offers a good example of this. The film portrays Walt Disney overseeing the studio's many departments, directing the inking department, filled with "hundreds of pretty girls in a comfortable building all their own," as well as the story department, where writers and the studio's "hard boiled directors" develop ideas. The film is careful to note that although many people work on a given film, Disney always gives direction and final approval. The management of labor, particularly its division, thus affords creativity. The "pretty girls" are compartmentalized, rendering their labor as rote, menial, and linear. Having someone or something to do the repetitive work of animation exactly as directed allows animators and managers to do the creative work. Animation is rhetorically transformed into management by Disney: the people in charge are the creators, and the ones that do the work merely follow directions.

Like Walt Disney Productions, Bray Productions, and Fleischer Studios, Pixar has sought to convey its management theory through public relations and promotional material. Pixar's representations of management also seek to negotiate this dynamic between anarchic liveliness and control. However, these concepts are mediated through a very different epistemic frame. While early studios like Bray Productions and Fleischer Studios focused on management as routinized, efficient, and rigid, the following sections will show how Pixar focuses on embracing unpredictability, flexibility, and resilience. While Disney used the linear work done by departments of below-the-line workers to make above-the-line animators and executives seem livelier and more creative, Pixar instead promotes its use of nonlinear processes that purport to integrate input from all levels of staff in an egalitarian fashion and inspire the unpredictable chaos that studios like Fleischer and Disney sought to corral and rationalize. By couching their approach to management in the same concept of nonlinearity that drives many of their animation tools, Pixar furthers this collapse between animation as making moving images and animation as management, and it further obfuscates points of labor conflict in a black box of computational paradigms.

#### Nonlinear Simulation at Pixar

As Chap. 2 explained, modeling and simulating nonlinear systems requires some element of unpredictability. One way of doing this is simply to use a random number. By inserting some random factor into an otherwise predictable simulation you can see what types of outcomes are possible. This is referred to as a stochastic simulation. Another approach is to use dynamics. Here, if you set up multiple sets of rules that influence each other in turn (A influences B, which influences C, which influences A) the result is complex and unpredictable. The concept of nonlinear simulation has influenced a variety of research and engineering fields, including computer graphics and management science. This way of thinking has been fundamental to Pixar since before it was an independent company, and it continues to influence the way the company constructs itself.
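For readers unfamiliar with these two techniques, both can be sketched in a few lines of Python. This is a generic illustration, not code from any Pixar system; the growth rate, noise range, and coupling constants are invented for the example.

```python
import random

def stochastic_growth(steps, seed):
    """Stochastic simulation: a predictable process (steady growth)
    perturbed by a random factor. Re-running with different seeds
    shows the range of possible outcomes."""
    random.seed(seed)
    value = 1.0
    history = [value]
    for _ in range(steps):
        value *= 1.05 + random.uniform(-0.1, 0.1)  # growth plus noise
        history.append(value)
    return history

def coupled_dynamics(steps):
    """Dynamic simulation: three rules that influence each other in a
    loop (A influences B, B influences C, C influences A). There is no
    randomness, yet the trajectory is hard to predict from the rules
    alone because of the nonlinear a*b coupling term."""
    a, b, c = 1.0, 0.5, -0.5
    history = [(a, b, c)]
    for _ in range(steps):
        a, b, c = a + 0.1 * b, b - 0.1 * c, c + 0.1 * a * b
        history.append((a, b, c))
    return history
```

The same seed always reproduces the same stochastic run; varying the seed explores the space of possible outcomes, which is what makes such simulations useful for asking "what could happen?" rather than "what will happen?"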

The computer graphics scientists who founded Pixar, William T. Reeves, Loren Carpenter, Alvy Ray Smith, and Ed Catmull, are all well versed in the concept of nonlinear simulation. Indeed, three of the four founders made some of the most important early contributions to nonlinear animation technology. Many of the early technologies they developed are on display in their first Hollywood contract, the "genesis sequence" in *Star Trek II: The Wrath of Khan* (1982), which they completed while still under the aegis of Lucasfilm's computer division. The genesis sequence was led by Alvy Ray Smith, a computer science researcher who had been teaching at NYU. Smith did his doctoral research on cellular automata, the type of simulation where simple rules applied to a grid of squares lead to unpredictable patterns through dynamic interactions. The sequence also prominently features the fractal topographies of former Boeing researcher Loren Carpenter,18 and the particle systems of William T. Reeves, formerly a researcher at the University of Toronto: a way of rendering volumetric things like fire or smoke as individual points whose motion paths are generated algorithmically, creating nonlinear shapes and movement.19
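The cellular automata Smith studied are simple enough to sketch directly. The following is a minimal Python implementation of Conway's Game of Life, the best-known cellular automaton (offered as an illustration of the genre, not as Smith's research code): two fixed rules applied to a grid of squares generate famously unpredictable patterns.

```python
from collections import Counter

def life_step(live):
    """One step of Conway's Game of Life. `live` is the set of (x, y)
    grid squares currently alive. A dead square with exactly three live
    neighbours is born; a live square with two or three survives."""
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five live squares whose shape travels across the grid,
# a behavior no one wrote into the rules themselves. After four steps
# the same shape reappears shifted one square diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
```

Patterns like the glider, which emerge from the rules rather than being designed into them, are exactly the kind of dynamic interaction the chapter describes.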

The genesis sequence is the first of many examples of nonlinear animation that this group would produce. Following their separation from Lucasfilm, Pixar released a series of new nonlinear simulation techniques at the Association for Computing Machinery's special interest group on graphics (SIGGRAPH), with each new technical advance illustrated with an animated tech demo. For example, in 1986, Reeves worked with researcher Alain Fournier to develop a system for animating ocean waves and rippling fabric, a technique they showcased in a demo titled *Flag and Waves*. Pixar has consistently developed nonlinear simulation technology since. Many subsequent technologies came from long-time Pixar scientist Michael Kass. Over his career Kass helped develop an influential fluid simulation,20 a cloth simulation used in the short *Geri's Game* (1997), and Pixar's FizT dynamic hair and cloth animation software, which was used in *Monster's Inc.*

When a new nonlinear animation technology is on display in a Pixar feature, the film is accompanied by a media campaign that promotes the new technology through press pieces, DVD extras, and promotional videos. For example, a piece on the tech website CNET about *Finding Dory* (2016) talks about the technology Pixar developed to animate the elastic-yet-soft properties of an octopus and also their new "auto-swim" software for procedurally animating fish.21 This rhetoric of innovation is echoed over and over in promotional material. For example, a video produced by Pixar for *Monster's Inc.* resembles a sort of infomercial, with a cast of famous voice actors like Billy Crystal and James Coburn praising how revolutionary Pixar's FizT fur and cloth simulation software was at that time.22 A popular trope of this kind of promotion is to compare the challenges of the last feature project to the one currently being promoted. In a website article for *Finding Nemo,* supervising technical director Oren Jacob is quoted as saying "This film is far more complicated than 'Monsters, Inc.' in that almost every shot involves some kind of simulation program or simulated movement."23

Depictions of these nonlinear animation tools also gesture toward their role in production labor, highlighting the new technology's ability to do animation work in place of animators. For example, an uncredited worker in Pixar's promotional piece for FizT explains that animating every single hair on a furry creature would be impossible, "all our animators would quit." The solution was to remove this task and displace it onto a new technology so, in his words, "our animators don't have to worry about it at all."24 An article in *Wired* on FizT that coincided with the release of *Monster's Inc.* offers a rather idiosyncratic take on this logic. The author argues that while films with spectacular computer graphics have a tendency to suffer in terms of story, the way these technologies save animator labor allows them to focus on nuanced characters and story instead.25 This discourse resembles the logic of some of the representations of early industrial animation. Just as Walt Disney Productions represented the work of inkers and inbetweeners as allowing animators and managers to focus on creative work, nonlinear animation tools seem to provide a way of minimizing and compartmentalizing repetitive work. Much like registration pegs or rotoscoping, nonlinear simulation technologies are management tools.

With both the cases of Walt Disney Productions and Pixar, this emphasis on enabling creativity puts a very positive face on management tools that make for a more compliant and productive workforce. Nonlinearity puts a different frame on management than these early industrial techniques, though. If Bray, Fleischer, and Walt Disney studios depicted industrial management as enabling creativity because it enabled direct and linear control, Pixar sees management as enabling creativity because it occasions the unpredictable as well as flexible, responsive control. Fostering the unpredictable and shaping the results works just as well for simulations of monster fur as it does for a workforce, in Pixar's vision. This approach positions creativity and control so close as to be indistinguishable.

#### Management Science

Pixar's application of nonlinear animation to management did not happen in a vacuum. Management science is one of the many fields where nonlinear simulation has been employed as a tool. The jump from animation software to management theory is not as great as it might seem because nonlinear simulation is applied to all manner of complex, unpredictable phenomena. It has become an important part of many engineering and research disciplines, including aerospace, geology, climate science and meteorology, economics and finance, and social science. In many cases the development of tools for these disciplines has come to constitute its own field in computer science. While simulation techniques for animation are shared at the computer science special interest group ACM SIGGRAPH, management science simulation technology is shared at the Simulation and Modeling group (SIGSIM) or the Management Information Systems group (SIG MIS).

While management science follows the tradition of Taylor in that it theorizes management, management science and scientific management are two discrete concepts with important historical and conceptual differences. While scientific management studied particular industrial cases so that their efficiency and productivity might be improved, management science studies the organizational nature of systems themselves. Management science, in other words, seeks to uncover fundamental principles of systems in order that they can be applied as organizational techniques. As historian of management thought Morgen Witzel writes, "Scientific management was about exploring new methods; management science was and still is engaged in the quest for systems."26 This implies a different way of seeing the world, a different epistemology, an epistemology that is entwined with computer science and nonlinear simulation.

When it emerged management science was covalent with concepts like Norbert Wiener's cybernetics and Ludwig von Bertalanffy's general systems theory.27 The key early text of management science, Stafford Beer's *Cybernetics and Management*, first published in 1959, demonstrates this very clearly.28 Early cybernetics and systems theory were generally focused on self-regulating systems. In other words, they sought to understand how systems stayed stable through self-correction, as the name, from the Greek *kybernḗtēs* or *steersman,* indicates.29 Computers, of course, provide an important conceptual model and tool for modeling such systems.

Very quickly though the paradigm of homeostasis began to be replaced with a greater interest in the way a given system interacts complexly and dynamically with other systems. Simulation tools were an important part in this turn. Even the very earliest computer simulations provided an opportunity to factor in unexpected and random events. For example, the first computer simulation, the Monte Carlo Method, was a stochastic simulation. *The Encyclopaedia of Operations Research and Management Science* offers the following example of how simulations such as these are used in complex management science scenarios. Say a small company signs a new supply contract and they want to be prepared to fulfill it. They can simulate the stages of manufacture using discrete-event simulation and simulate the orders coming in at random intervals, thus testing their preparedness for an unpredictable number of orders.30 Nonlinear simulation is thus a powerful tool for testing management systems against the unpredictability of reality.
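The encyclopaedia's supply-contract example can be rendered as a toy discrete-event simulation in Python. The arrival and production figures below are invented for illustration; the point is the structure: orders arrive at random intervals, a single production line fills them one at a time, and the simulation measures how large a backlog the company might face.

```python
import heapq
import random

def simulate_orders(horizon, mean_gap, service_time, seed=0):
    """Toy discrete-event simulation: orders arrive at random
    (exponentially distributed) intervals; one production line fills
    them at a fixed rate. Returns the largest backlog observed, a
    rough measure of the company's preparedness."""
    random.seed(seed)
    # The event list holds (time, kind) pairs in chronological order.
    events = [(random.expovariate(1 / mean_gap), "order")]
    backlog, free_at, worst = 0, 0.0, 0
    while events:
        now, kind = heapq.heappop(events)
        if now > horizon:
            break
        if kind == "order":
            backlog += 1
            worst = max(worst, backlog)
            # Schedule the next random arrival.
            heapq.heappush(
                events, (now + random.expovariate(1 / mean_gap), "order"))
            if free_at <= now:  # line is idle: start this order now
                free_at = now + service_time
                heapq.heappush(events, (free_at, "done"))
        else:  # an order finished
            backlog -= 1
            if backlog > 0:  # start the next queued order
                free_at = now + service_time
                heapq.heappush(events, (free_at, "done"))
    return worst
```

Running this with fast arrivals and a slow line shows the backlog climbing; with slow arrivals it stays near one. Re-running with different seeds tests the system against many possible futures, which is precisely the kind of preparedness testing the encyclopaedia describes.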

The key text that summarized the use of these techniques for management science was Jay Wright Forrester's 1961 book *Industrial Dynamics*. During the 1970s these ideas evolved even further, from asking how an organization could maintain stability in the face of uncertainty to asking how organizations themselves change. The concept of resilience originated in a 1973 paper by Canadian ecologist C. S. Holling, titled "Resilience and Stability of Ecological Systems." At its simplest, Holling's contribution was to ask why we assume the natural world is in a state of equilibrium when it is clearly not. The concept of organizational resilience extends from this because it asks how systems persist despite the fact of constant change. Holling writes that resilience "is a measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between populations or state variables."31 While randomness, dynamics and unpredictability had been used to understand and manage systems for some time, Holling's study of ecological resilience was a new way of thinking that did not abhor change but instead embraced it as a necessary component of preserving a system. Resilience joined various other popular concepts in the 1970s such as chaos theory, catastrophe theory, and fractals that made unpredictable nonlinearity central and even desirable.

Nonlinear simulation has continued to play an important role in management science since. The contemporary management discipline of business process management (BPM) makes extensive use of nonlinear computer modeling.32 A subfield of study focused on the management of "creativity-intensive-processes" has also emerged, and it too is informed by these principles.33 The combination of computational tools and an epistemology of nonlinear systems has led to forms of management that are more flexible and responsive, more at home with uncertainty and contingency. If Taylor was focused on increasing output on a regulated, standardized, repetitive, linear production line, these techniques are more focused on processes with uncertain outcomes and on flexible, responsive forms of management. As the following section will discuss, Pixar's representations of their management theory show the influence of some of these more recent management science discourses, although Pixar puts their own inflection on management because their approach is also informed by their nonlinear animation tools and by their connection to the history of animation. They approach workers like a simulation of splashing water or flowing hair. This approach offers a more harmonious vision of management that minimizes dissent, offering a sanitized, technologized vision of creativity.

#### Management Theory at Pixar

Pixar promotes its management theory every bit as vociferously as it does its latest nonlinear animation technology. Representations of management can be found in DVD extras and public relations pieces in the press, just like their representations of new animation technology. A few company executives have also published articles and books that discuss the studio's approach to management. The most famous of these is of course former Pixar CEO (and soon-to-retire head of Disney Animation Studios) Ed Catmull's book *Creativity Inc.*

Pixar emerged amid a discourse of a post-industrial information society that privileges cognitive and symbolic labor and a discourse of disruptive technological innovation. One of the dominant paradigms of post-Fordist management focuses on the idea of workers as independent, self-managed, and self-promoting contractors.34 In the case of technology industries this takes the shape of entrepreneurship and disruption, as promoted by management theorist Clayton Christensen's book *The Innovator's Dilemma* and by the resurrection of Joseph Schumpeter's theory of creative destruction, which pits the heroic entrepreneur against the entrenched power of bureaucratic "giant concerns."35 The discourses of self-management, worker autonomy, and entrepreneurialism are certainly present in Pixar's representations of itself. Its identity as the studio that brought "Silicon Valley bravado" to Hollywood and disrupted entrenched powerful studios is a classic of heroic Schumpeterian entrepreneurship.36 Yet Pixar's identity and its approach to management are more complex than this context would suggest.

Pixar's representation of its management theory fits delicately into its greater corporate image. On one hand, Pixar seeks to style itself in the tradition of industrial animation studios, especially Walt Disney Productions.37 *The Incredibles* featured a cameo from the two living members of Disney's "nine old men," Ollie Johnston and Frank Thomas, and in his book Ed Catmull notes the influence his childhood hero Walt Disney had on him. Catmull describes with admiration how Disney would acknowledge the importance of his forerunners like Winsor McCay and the Fleischer Brothers, suggesting Catmull himself does the same.38 On the other hand, as Malcolm Cook notes, Pixar has been a hardware and software company for a significant part of its history.39 As a company that was once owned by Steve Jobs, had an IPO two years before Amazon, and weathered the dot-com bubble like few other start-ups, it is one of the archetypal Silicon Valley tech companies (even if it is based in the East Bay). Pixar's approach to management straddles these identities of animation studio in the Fordist tradition and Silicon Valley tech company in many ways, and their use of nonlinear paradigms as a kind of animation frequently works to smooth over potential contradictions between the two.

The influence of a nonlinear epistemic paradigm is quite evident in Catmull's writing on management. He situates the company's relationship to the unforeseen and unpredictable as the core of their management theory. In *Creativity Inc.* he discusses people's innate fear of "random unforeseen events" and our tendency to look for patterns rather than randomness.40 He contends that we should overcome this way of thinking and embrace the reality of randomness by designing organizations so that failure and the unexpected are not fatal threats. In *Harvard Business Review* he writes "we as executives have to resist our natural tendency to avoid or minimize risks."41 One can see the clear influence of management science and of the epistemology of nonlinear simulation in these statements. Rather than seeing "unforeseen random events" as something to resist and avoid, Catmull, like C. S. Holling, emphasizes how unpredictable complexity and randomness are central to any organization.

Further points Catmull makes seem to have been inspired directly by nonlinear animation technology. He writes, "to my mind randomness is not just inevitable; it is part of the beauty of life… The unpredictable is the ground on which creativity occurs."42 The language he uses here puts one in mind of a water simulation, or the "genesis sequence" from *Star Trek II*. Animations such as these revel in the natural beauty of randomness and complexity, just as Catmull does in this quote. Indeed, Catmull makes this connection quite explicit. He writes that his insights about randomness and the unforeseen are attributable to his background in mathematics and physics.43 Catmull is clearly influenced by the epistemology of nonlinear simulation. Some further examples demonstrate how this fits in with the rest of Pixar's image as both animation studio and tech company.

Pixar's techniques for fostering the unforeseen and the unpredictable focus on three interlinking strategies. All of these strategies are addressed in Catmull's book, but they can also be found in a diversity of promotional material produced by the studio. Much of this material represents these strategies with notable lock-step consistency, indicating how carefully the studio controls the representation of its management theory. First is what Catmull refers to as "protection," that is, giving workers space to develop their ideas, experiment, and fail. Second is the studio's identity as a nonhierarchical organization that invites input from all workers, which is modeled on the concept of "total quality management" (TQM). Third is the way the architecture of the studio's buildings is designed to foster unpredictable interactions between workers. Notably, these principles contradict many of the Taylorist and Fordist principles promoted by the likes of Fleischer, Bray, and Walt Disney Studios. All of these strategies promote the opposite of efficiency and direct top-down managerial control. Indeed, behind-the-scenes material about Pixar offers a sort of spectacle of superfluous labor. Scholars including Paul Flaig have described *Creativity Inc.* as a sort of manual for post-Fordism, in part for the way it promotes concepts like worker self-management.44 Yet there is more to Pixar's approach to management than the difference between industrial and post-industrial management. Looking at the influence of nonlinear management concepts and the studio's relationship to animation's industrial past helps nuance our understanding.

The first strategy, Pixar's principle of protection, refers to protecting ideas so that workers have a chance to explore their potential, even if they end up contributing nothing in the end. Catmull writes that experimental projects and new ideas are often sacrificed in the name of efficiency. This, he argues, leads to stasis and ossification in old established institutions like film studios.45 There needs to be room for experimentation, for play, in order for the unforeseen to emerge. The idea of protection is tied to the studio's origin myth. In promotional material many senior staff express a great deal of affection for the early days of Pixar when their offices were in Point Richmond because it was a period when the company explored every idea of how they could make money.46 There was room for experimentation and play, even if it was obviously fruitless. Promotional videos about Pixar's history feature figures such as Pete Docter, Andrew Stanton, and Dylan Brown reminiscing about racing scooters around the office and trying to lodge stuffed animals in the ceiling.47

While this childish play at Point Richmond pushes the definition of experimental work to the point of absurdity, it is still related to the idea of protection. The point is that workers have been given the space to experiment, to discover something unexpected. The irreverent and playful activities of Pixar workers are a particularly popular subject for behind-the-scenes footage of the studio for promotional materials and DVD extras. They can be seen sliding across the Emeryville Campus's slippery floors in their socked feet or riding down flights of stairs in cardboard boxes wearing Viking helmets, for example (Pixar, 2005). Pixar has collected several stories of worker playfulness in a series titled *Pixar Studio Stories*. These shorts are narrated by Pixar workers with accompanying limited two-dimensional animation, conveying a sense of childish playfulness. They first appeared as promotional videos and Blu-ray extras accompanying the release of *Toy Story 3* (2010) and the Blu-ray re-release of *Toy Story* (1995) and *Toy Story 2* (1999), but Pixar has continued to make them since. One story describes the studio's annual Halloween costume contest,48 another the studio's annual battle-of-the-bands called Pixarpalooza.49 These activities, the narrator stresses, are not the product of over-achieving human resources workers. They are organized by the workers.

Perhaps the most classic example of this is the myth of the "Love Lounge." Animator Andrew Gordon discovered an empty space through an air-conditioning duct behind his office and set about turning it into a little club house. As he relates in *Pixar Studio Stories,* he thought he would get in trouble when he was found out, but instead management embraced his idea.50 These images of workers goofing off recall Fleischer Studio's representations of their animators' mischievous and unruly hijinks. Both examples strive to imbue the often-monotonous process of creating animations with a sense of liveliness. Yet, as Flaig notes, this "whistle while you work" discourse is a classic of post-Fordist creative labor. Furthermore, stories like the one from Point Richmond are narratives of entrepreneurial beginnings, a trope that evokes autonomy and self-management. These two sides of protection might seem to cooperate quite well, but there are some contradictions here. While Fleischer Studios tended to emphasize their ability to paternalistically manage the liveliness of their animators, Pixar erases any semblance of hierarchy or control. This contradiction is managed by the paradigm of nonlinear simulation, which focuses on the studio's ability to foster liveliness, to animate its production process, in other words. Like a nonlinear animation, they set parameters that create the conditions for the unexpected and their management embraces the change that follows.

Richard McCulloch notes that these sorts of behind-the-scenes extratextual communications style the studio brand as fun and playful, and thus condition the reception of their films.51 Pixar's promotion of its theory of protection works in harmony with this playful corporate ethos. The way they create spaces for creativity to happen conveys that the studio is a fun and playful place, but it also conveys the studio's ability to extract ideas from their creative workers through a technologically advanced management theory that is styled along the same principles as their computational animation technology.

The second strategy, Pixar's supposedly anti-hierarchical management philosophy, serves much the same function as the concept of protection, ameliorating their corporate image, communicating the effectiveness of their nonlinear organizational paradigm, and negotiating their hybrid identity as modern tech company and traditional animation studio. Pixar's organizational identity is grounded in the principle of total quality management (TQM). Developed by mathematician W. Edwards Deming during the post-war reconstruction of Japanese industry, TQM puts the responsibility of ensuring the quality of a product on all workers.52 This puts it at odds with the Taylorist emphasis on efficiency. The classic example of TQM manufacturing illustrates this point. Under the Taylorist paradigm the assembly line is supposed to never stop, because stopping has catastrophic consequences for output efficiency. TQM holds that any worker should be given the power and responsibility to stop the line if they see a problem. This is an early example of post-Fordism because it shifts the responsibility of management onto the worker. Catmull was an early adopter of TQM in the United States. He describes it as "making production a creative endeavour that engages workers."53 TQM is not itself an example of the influence of nonlinear thinking. It emphasizes the uniformity and consistency of a product, abhorring the unpredictable. Yet in the hands of Pixar it both connects with animation's industrial past and becomes inflected with nonlinear epistemology. It becomes a way of embracing unpredictable change.

In his book on Pixar, CFO Lawrence Levy argues that the strict hierarchies of Hollywood studios and even large tech companies like IBM make them change and risk averse because everyone is trying to protect their position.54 Any disruption endangers entrenched power. Undermining hierarchy therefore invites the unexpected. Consider how Fleischer Studios and Walt Disney Productions represented their labor organization. The majority of the workers were a linear means to an end. Directors, producers, and top-level animators were constructed as the sites of creativity and the workers merely followed their instructions. In the case of Fleischer Studios most of the workers below the animator were invisible, while Walt Disney Productions compartmentalized their "pretty girl" inkers and inbetweeners into their own buildings. These hierarchies served a discourse of creativity in their own way, but they are highly linear. Pixar's TQM approach, by contrast, is not represented as a linear process of workers completing tasks exactly as directed. Instead, they "empower" workers to have input on the final creative product. The film thus emerges from an unpredictable nonlinear process that includes many sources of input. Catmull writes in *Harvard Business Review*, "Creativity must be present at every level of every artistic and technical part of the organization. … It's like an archaeological dig where you don't know what you're looking for or whether you will even find anything. The process is downright scary."55 Like Walt Disney Productions, Pixar is quite keen to show off their approach to management and sew it into their definition of animation. For Walt Disney Productions, linear means of control allow Disney to style himself as the creative agent behind cartoons, even if he does none of the animating himself. With Pixar, their TQM-styled approach to management demonstrates their vision for animation as creating conditions for unpredictable things to happen.

As with all of these examples, this talk of non-hierarchical organization likely does not reflect actual practices. There are, of course, labor divisions within Pixar. And it is not as though anyone can make a change without oversight.56 Indeed in other cases Pixar paradoxically promotes more hierarchical features of their company like the "brain trust," a panel of the most senior creative minds.

Pixar's third management strategy is workspace design, more precisely the design of their studio headquarters in Emeryville, California. The Emeryville campus and its single main building, the Steve Jobs building, are the most central feature of their corporate identity.57 The Emeryville campus was constructed following Pixar's second feature *A Bug's Life* (1998) and their 1995 IPO, which raised the considerable funds necessary for such a project. The central design principle of the campus is fostering random unpredictable events. In the early stages of planning the building several ideas were put forth. John Lasseter, a director with a background in Disney animation, wanted to have a different building for each production.58 By contrast the final design saw everyone in a single building, with technical staff on one side and animation staff on the other, and both forced to congregate in the center.59 At the center is a large atrium with a coffee bar and a large staircase connecting the first and second floors. The purpose behind this design is to cause interactions between workers from different departments. Indeed, an early design had only one set of bathrooms in the middle of the building, forcing maximal interaction.60

The story of the Steve Jobs building's design is rehearsed constantly in behind-the-scenes public relations studio tours, with titles such as "Behind the Scenes at Pixar,"61 and "A Rare Look Inside Pixar Studios."62 All of these publicity pieces follow almost the exact same set of talking points. Pieces by *The New York Times*, *The Guardian,* and *The Huffington Post* all marvel at the playful work culture and the Love Lounge, and they all talk about the design of the building as a way to foster "chance meetings."63 Much like Pixar's concept of protection and their anti-hierarchical TQM philosophy, these stories model Pixar's corporate ethos as both playful and dynamically innovative. The visual imagination of these interactions is homologous with some of the software Pixar founders themselves developed for animation, such as particle systems or cellular automata: simulations where individual points moving along unpredictable paths interact with each other and give rise to unexpected shapes and movements. One can imagine a blueprint of the atrium with such a simulation overlaid, with all of the employees represented by dots moving in random directions as they collide and interact. The atrium is a designed set of parameters that occasions unpredictable nonlinear events, a sort of synthetic creativity, a creativity that emerges not from direct deterministic fulfilled orders and efficiency but from mistakes and collisions.

The Steve Jobs building is perhaps the best symbol of the hybrid logic of Pixar's representation of its management theory. On one hand, there is something very unlike post-Fordism about the building's representation. While other similar industries like the visual effects industry hire workers on six-month contracts and subcontract out work to competitive bidders, Pixar emphasizes having a brick-and-mortar building where they retain and cultivate their workers.64 Indeed, their desire to retain workers became the subject of controversy when it was revealed they had secret agreements with other studios not to poach each other's workers. As Eric Herhuth points out, this desire to retain workers, even to the point of breaking labor laws, contradicts "California-style" neoliberalism.65 On the other hand, the Steve Jobs building is distinctly unlike Walt Disney Productions' approach of dividing labor into different buildings, the way John Lasseter apparently originally wanted. The building reinforces the image of Pixar as a non-hierarchical fun factory. Thus, neither of these discourses is sufficient to explain Pixar's promotion of the building. To understand it, one needs to understand the logic of nonlinear simulation behind Pixar's management theory. The Steve Jobs building is like a simulation: it is a field, a set of parameters, within which unpredictable creativity takes place. Pixar builds a box for its workers to be independent in. All of these examples of protection, worker playtime, non-hierarchical structures, and the Steve Jobs building's design are about creating conditions and manipulating parameters to occasion the unexpected and the unlooked-for. This discourse offers a harmonious, utopian vision in which there is no conflict between creative chaos and control.

#### Animating Offices

Vivian Sobchack, Paul Flaig, and Eric Herhuth have all noted the way Pixar films like *WALL-E* build a bridge between the past and the present, between an age of mechanical photochemical media and Fordist employment and an age of frictionless digital media and self-managed creative work.66 In Herhuth's words, they "mitigate extremes and render transition more palatable."67 One could certainly interpret Pixar's representation of their management theory in this way, as a bridge between past and present, but it is first and foremost a bridge between the identities of an animation company and a technology company, between culture and technology. Pixar promotes a hybrid approach to management, in tune with both animation's industrial past and post-Fordist management principles. They do this by employing a computational logic that seems to have the ability to resolve any conflict, to maximize control and respond to unforeseen change while also facilitating creativity and animating workers. They animate workers much the way their nonlinear software animates pixels. Far from being halfway between the present and the past, this approach to management has been disseminated by its proponents as a solution to the demands of cognitive capitalism and creative work.

Studios like Walt Disney Productions constructed animation as management. Through the studio's mobilization of regimented, disciplined workers, its creative minds were spared the labor of physically doing the animation, enabling them to direct the process. The studio's promotional material uses this above-and-below-the-line division to reassure the audience that even though hundreds of people worked on a cartoon, it was the product of a few creative minds. Their work as animators was directing the work of others. Pixar, by contrast, constructs management as animation. Through the vital animating force of nonlinear contingency, Pixar constructs its vision of management as enlivening work by introducing unpredictability. This is a different way of conceiving of control, one where the job of the studio is not to control the liveliness of workers but to enliven them.

Given Pixar's now mythic status as a successful Bay Area tech company once owned by Steve Jobs, and given the success of Ed Catmull's essays in the *Harvard Business Review* and his book *Creativity, Inc.*, this conception of management as nonlinear animation has spread to varied businesses that seek post-Fordist goals of creativity and innovation. True, concepts like open-plan offices and designing for creativity have been around at least since advertising company Chiat/Day opened their radical new offices in 1994, which *Wired* describes as being motivated by "egalitarian utopianism."68 But as Nerf guns and pet dogs proliferate in urban office spaces, it is difficult not to see the logic of nonlinear animation at work, enlivening workspaces and setting the conditions for unpredictability. The history of animation and of nonlinear simulation has had a greater influence on contemporary work life than one might assume.

Pixar's approach to management offers a recent chapter in the egalitarian utopianism of both office spaces and technology. Their seamless, harmonious image of managing creativity recalls some of the more utopian visions of Californian techno-neoliberalism. Richard Barbrook and Andrew Cameron's essay on the "Californian ideology" was one of the first to note how an "emancipatory faith" in new technologies is linked to a "libertarian form of politics" that elides real-world inequities.69 This use of emerging technologies to erase politics is also at work in the discourse of digital media platforms, which present themselves as "neutral" and "egalitarian," hiding the way they exercise algorithmic control and the political consequences that this entails.70 As recent work by Cathy O'Neil and Safiya Noble demonstrates, algorithms are far from being politically neutral.71 Instead, the logic of algorithmic control can act as a veneer of objective neutrality that makes bias seem impossible and irrelevant. But of course, as recent events have made clear, Pixar is not a harmonious utopia for everyone. News came out in 2017 about Lasseter's pattern of sexual harassment.72 Furthermore, in 2018 former Pixar graphic designer Cassandra Smolcic criticized the pervasive sexism baked into Pixar culture, offering a variety of examples based on her five years working there.73 Interestingly, in Lasseter's vaguely contrite memo to the workers of Pixar responding to accusations of sexual harassment, he frames his behavior as a failure in managing creativity. He writes, "This kind of creative culture takes constant vigilance to maintain. It's built on trust and respect, and it becomes fragile if any members of the team don't feel valued."74 Even when issues of justice and equality emerge, Pixar's approach to management feels sanitized of history, politics, and difference. Instead, it is all about creating conditions and parameters.

A version of this chapter appeared as an article in *Animation: An Interdisciplinary Journal*.

Jordan Gowanlock, "Animating Management: Nonlinear Simulation and Management Theory at Pixar," *Animation* (15:1) pp. 61–76. Copyright © 2020. DOI: https://doi.org/10.1177/1746847719898783.

#### Notes



### Cinematic Chaos, Catastrophe, and Unpredictable Embrace

In the halls of research universities and R&D labs, specialists have been trying to make sense of the world through nonlinear simulations for seventy-five years. The transition from being a niche scientific paradigm to becoming an epistemic frame broadly held by the public has been gradual. The 1970s saw concepts like catastrophe theory and fractals gain a certain amount of public awareness, then chaos theory the following decade. In the late 1980s and early 1990s, cultural theorists like N. Katherine Hayles began to reflect on what effects these ideas were having on literature and the arts,1 and postmodern theorists like Jean-François Lyotard named several of these discourses as a sign of the breakdown of "grand narratives" in the sciences.2 These exotic and strange forms of mathematics were the more salient side of a general shift that saw parts of everyday life like weather, climate, economics, and public health being understood through nonlinear simulation. When something goes wrong, we have become accustomed to looking to such models to make sense of things. In the 2006 lecture film *An Inconvenient Truth*, Al Gore uses climate simulations to convince viewers of the gravity of climate change, showing us what the world will look like if we do nothing. Models and projections have become accepted points of reference in the press for everything from epidemic outbreaks to market crashes. Nonlinearity has become the embodiment of unpredictable change. We increasingly see events as the product of a random stochastic factor inserted into an equation, or the complex dynamic interaction of many different factors within a system. Yet we have also started to place our faith in simulation to understand, predict, and even benefit from nonlinear chaos.

Popular feature films that portray unpredictable change demonstrate this cultural epistemic shift over time. Films during successive periods since 1982 show nonlinearity increasingly becoming a way to represent an unforeseen threat. Simultaneously, simulation has emerged in these texts as a tool that offers reassurance in the face of this unpredictability. Nonlinear animations play a key role in rendering these threats and reassurances on screen by visualizing unpredictable phenomena like waves, storms, and planetary catastrophes. This span of time can be approximately broken down into three key phases. In the earliest of these periods, nonlinear animation is used to create diegetic simulations that evoke a sense of totalizing computational mastery. In the late 1990s and early 2000s, it is used to convey the menace of unpredictable, catastrophic events, while diegetic scientific models offer the reassurance of prediction and understanding. Finally, and most recently, nonlinear animation is deployed in animated features to represent an at-first threatening unpredictable force that characters befriend and harness for good.

Understanding the way these nonlinear animations make meaning, and the way they interact with the themes and narratives of these films, requires some initial theoretical consideration. The images discussed in this chapter are unquestionably spectacular. They feature prominently in the promotional material for the films in which they appear, and the texts tend to dwell on them, giving the spectator time to marvel at their technical feats. Annette Kuhn and Andrew Darley have used the term spectacle to emphasize how divorced visual and special effects are from the narrative and themes of the films that feature them.3 Some see this spectatorial divorce as a positive quality, following Tom Gunning's suggestion that the "cinema of attractions" mode of spectatorship "goes underground … as a component of narrative films, more evident in some genres."4 From this perspective, effects spectacles have the capacity to be reflexive, to draw attention to themselves as media. As Dan North puts it, special effects are always about the relation between "the real and its technological mediation."5 As Chap. 3 noted, such moments of spectacular effects have often served to foreground a new filmmaking technology and position its industrial status. Moments like Al Jolson's synchronized sound performance in *The Jazz Singer* (1927) declare to audiences that this is the future of cinema. In these moments feature films act like a sort of SIGGRAPH promotional "tech demo."6 Effects spectacles reflect on the technologies of their making because they represent a suspension of narrative.

Consistent with this established scholarship on spectacular visual and special effects, the examples in this chapter offer moments of suspension where viewers marvel at a spectacle. These effects are legible as effects. When you watch the mountains in *Star Trek II: The Wrath of Khan* (1982) or the gigantic waves in *The Day After Tomorrow* (2004) or *Frozen II* (2019), you can see the effect. Indeed, you can even tell that there is a special, unusual quality to these effects. They appear too detailed and unpredictable to have been made by hand. These examples are all bold demonstrations of technological sophistication and production scale, yet to say that this is the only meaning they produce would be extremely reductive. All these examples have meaning as part of the particular texts they are in, and the combination of narrative, theme, and spectacle has much more nuanced things to say about nonlinearity, risk, disaster, and simulation.

While spectacles are occasions to presentationally foreground effects, this does not preclude them from connecting to the themes and narrative of the rest of the film. Lisa Purse argues that thinking dichotomously about digital spectacle and narrative "fails to capture the complex manner in which digital literacy of various kinds might intersect with narrative meaning."7 Aylish Wood finds that these suspended moments of spectacle expand the time and space of the narrative rather than interrupting it.8 Following Bob Rehak's theory of the special effects "micro-genre," Kristen Whissel argues that different types of effects produce different types of meaning within texts as "allegorical assemblages" or "spectacular elaboration(s) of concepts."9 To use Rehak's example, the "bullet time" effect made famous by *The Matrix* (1999) has a particular narrative and thematic function within film texts.10 Nonlinear animations can be interpreted as one such micro-genre, and they interact with the themes and narratives of the films in which they appear in important ways.

The nonlinear animation spectacles profiled in this chapter use nonlinearity to dramatize the threat of the unpredictable and unknown. Economic crashes, natural disasters, organizational collapse, and climate change are all represented through nonlinearity in these films as a way of communicating their unpredictability, non-anthropocentrism, and meaninglessness. At times they conjure a sort of Old Testament rendering of catastrophic change, presenting us, like the biblical Job, with adversity we cannot comprehend. Yet nonlinear simulation is also frequently posed as a way to model the unpredictable and give meaning to the meaningless. Diegetic computer models explain the mechanisms behind the disaster, and often a scientist predicts it, only to have their predictions fall on deaf ears. In the most recent examples, unpredictable chaos becomes something to embrace and benefit from. These uses of nonlinear animation are a continuation of cinema's long history with contingency. Mary Ann Doane observes that cinema codes contingency into a "representational system while maintaining both its threat and its allure."11 Nonlinear animation is deployed in these films in a similar way: both to represent a threat and to manage and make sense of that threat, even to convey a sense of empowerment against it.

This theme of empowerment is something Lisa Purse and Scott Bukatman have already recognized at work in spectacular special effects. Bukatman describes the spectatorial pleasure of "conceptual mastery over the complex,"12 and Purse writes about how digital special effects sequences "can dramatize power relations forcefully, articulating fantasies of empowerment in which the mastery of the visible offered by the sequence metaphorically correlates to the physical mastery or dramatic disempowerment of the protagonist."13 Both of these examples emphasize embodiment in relation to this experience of empowerment or disempowerment, but the following examples will be less focused on the representation of bodies and more on the representation of software users as the empowered.

The following sections cover three key eras in nonlinear discourse and examine the way they appear in narratives, themes, and visual effects aesthetics on screen. Each era demonstrates how society was beginning to understand the world through nonlinear paradigms and how it was seeking to develop tools to master unpredictable change. The first covers chaos theory and fractals as a way of modeling ecological change, the second covers terms like "perfect storm" and "catastrophe" as systemic and nonlinear ways of styling the ontology of disasters, and the final section covers the flexible and responsive approaches to management discussed in Chap. 5 that seek to capitalize on the creative power of unpredictability. The films in these three eras all thematically reflect on the ontology of emergence: where things like catastrophes, disasters, and even life itself come from. They also use VFX to render a spectacular visual equivalent of this emergence. When combined, they create moments of sublime marvel and horror. Yet these are also all stories of mastery, of contingency predicted and quantified, and of randomness given meaning.

#### Fractals and Chaos

When nonlinear animation first appeared in feature films, it was steeped in a visual and narrative discourse of the creation of unpredictable artificial life. Concepts like chaos theory and fractals had gone from being the subject of arcane mathematics in the 1970s to being part of popular culture in the 1980s. The homology between the shapes produced by these new equations and shapes in nature proved to be fertile ground for wild speculation about computer simulation's ability to replicate natural processes. Between the 1980s and early 1990s, films prompted spectators to marvel at the uncannily naturalistic appearance of these forms, but they also raised the specter of chaotic artificial life.

The first example of nonlinear animation in a feature film, the "genesis sequence" in *Star Trek II: The Wrath of Khan*, offers a clear example of early nonlinear discourses. In the fictional world of the film, the genesis project is a technology that can create life. The software used to animate it was itself based on fractal and stochastic mathematics. The film thus collapses the difference between simulated emergence and real emergence, and the difference between narrative theme and VFX spectacle. The genesis sequence was the product of one of the many early crossovers between military-industrial complex research and the film industry at SIGGRAPH. In 1980 Loren Carpenter, an engineer at Boeing's Computer Service Department, presented a short animation titled *Vol Libre*, which showed off his technique for drawing realistic-looking geological topographies using a combination of fractals and stochastic computational processes. By inputting a few parameters, Carpenter could automatically generate a realistic landscape of mountains and valleys. When Carpenter showed *Vol Libre* at the end of his 1980 SIGGRAPH presentation, the crowd allegedly erupted in applause and demanded a second viewing. After his talk, he was immediately offered a job by Alvy Ray Smith and Ed Catmull at the Computer Division of Lucasfilm.14 Lucasfilm would put Carpenter's technique to use almost immediately on *The Wrath of Khan*'s genesis sequence. Carpenter would soon go on to design other nonlinear technologies, like the L-system technique used to generate realistic foliage in *The Adventures of André and Wally B.* (1984). Much like Carpenter's topographical fractals, L-systems simulate natural patterns. First described by a botanist, they are algorithms that imitate the different bifurcation patterns of plant branches. Both technologies traded on the idea of computer graphics having a computational realism that could produce visual realism.
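The flavor of this sort of stochastic subdivision can be shown in a few lines. The sketch below is a simplified one-dimensional analogue of the idea, not Carpenter's actual triangle-subdivision algorithm (the function name and parameters are hypothetical): midpoints are repeatedly inserted between height samples and nudged by a random offset that shrinks at each finer scale, producing a jagged, mountain-like profile from nothing but two endpoints and a seed.

```python
import random

def midpoint_displace(profile, roughness=0.5, iterations=8, seed=0):
    """Grow a jagged 1-D terrain profile by recursive midpoint displacement.

    Each pass inserts a midpoint between every adjacent pair of height
    samples, offset by a random amount whose range shrinks (scaled by
    `roughness`) at every finer level of detail.
    """
    rng = random.Random(seed)
    heights = list(profile)
    spread = 1.0
    for _ in range(iterations):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            refined.extend([a, mid])
        refined.append(heights[-1])
        heights = refined
        spread *= roughness  # smaller displacements at finer scales
    return heights

# Two flat endpoints become a 257-sample mountain ridge.
ridge = midpoint_displace([0.0, 0.0])
```

As in Carpenter's demonstrations, the user supplies only a handful of parameters; the detail of the resulting landscape is generated, not drawn.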

In *The Wrath of Khan*, Captain Kirk's (William Shatner) United Federation of Planets has developed a technology called the "genesis device," which can terraform barren planets and cover them with plant life. This seemingly benign technology is stolen by the genetically modified super-villain Khan Noonien Singh (Ricardo Montalban), who intends to use it as a weapon. The sequence Carpenter made for the film is an explanatory interlude where the genesis device's function is demonstrated using computer graphics. The entire sequence takes place on a diegetic computer console screen, with Kirk, Dr. McCoy (DeForest Kelley) and Mr. Spock (Leonard Nimoy) gathered around the screen, viewing the demonstration. As a classified military demonstration, the sequence serves a similar diegetic function to the computer graphics technologies that preceded it at Boeing and SIGGRAPH. The narrating scientist even refers to the fact that this project's research would require further funding from the Federation of Planets. One could imagine such a film being made by Boeing to inform a military general about some new aerospace technology. The diegetic simulation and the nonlinear animation it is made of are remarkably close.

In the computer graphics sequence, the camera starts at space scale, viewing an entire dead planet in a field of darkness. The camera then begins its ballistic journey forward, zooming down to the surface of the planet. While the camera pans to the left to see the surface of the planet, it continues to hurtle forward through space. The entire sequence is one continuous shot, save for a cut back to the crew viewing the sequence to show their reaction. This aesthetic combination of extreme camera movement and long take is a standard of computer graphics tech demos of the time, because it aggrandizes the ability of computer graphics to render complete three-dimensional (3D) space.15 Before computer graphics, moving the camera in animation required laborious processes like Disney's multiplane camera or John Dykstra's motion control camera. This shot would have been impossible with the multiplane camera because the camera pans while moving forward at the same time. As Thomas Lamarre notes, Disney's use of the multiplane camera only allowed for a "bullet's eye view" looking straight ahead as it moved through transparent layers. Here any type of movement seems possible. Thus, the camera movement in this sequence functions as a demonstration of what digital animation can do, and Carpenter's stochastic shapes serve a similar function.

As a warhead hits the dead planet, an explosion takes place on the surface, causing a catalytic reaction, and a fiery red shockwave, animated using William Reeves' particle systems, encircles the planet. The flat planetary surface then begins to grow new topography with mountain ranges and valleys. Troughs fill with water, and exposed surfaces grow green with vegetation, using Carpenter's fractal drawing technique. As with *Vol Libre*, the shape of these features is legibly computational in nature. The lines of the peaks and valleys look too unpredictable and infinitely detailed to have been done by a human hand. They exude naturalistic computational origins. They illustrate the film's theme of artificial creation while functioning as a sort of tech demo for nonlinear animation.
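The particle-system principle behind the shockwave can be conveyed in miniature. The toy sketch below is a generic, heavily simplified illustration, not Reeves' published algorithm (every name and parameter range is hypothetical, and rendering, color, and particle hierarchies are omitted): each particle is born with a randomized direction, speed, and lifetime, and the animator specifies only those ranges.

```python
import math
import random

def burst(n=200, steps=30, seed=1):
    """Toy particle system: every particle is born at the origin with a
    random direction, speed, and lifetime, then drifts outward until it
    expires.  Returns the final (x, y) position of each particle."""
    rng = random.Random(seed)
    positions = []
    for _ in range(n):
        angle = rng.uniform(0.0, 2.0 * math.pi)
        speed = rng.uniform(0.5, 1.5)
        life = rng.randint(10, steps)  # frames this particle survives
        x = y = 0.0
        for _ in range(life):
            x += speed * math.cos(angle)
            y += speed * math.sin(angle)
        positions.append((x, y))
    return positions

# The ragged, expanding cloud is an emergent product of the random draws;
# no single particle's path is authored by hand.
cloud = burst()
```

The aggregate shape, a diffuse ring of varying radius, is exactly the kind of form that would be tedious to draw frame by frame but falls out of a few stochastic parameters.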

Like the other digital VFX milestone from the same year, *Tron* (1982), *The Wrath of Khan* contains digital effects within the frame of a diegetic computer rather than attempting photorealism. The images are presented as a computer simulation of what would happen if the device were used. The genesis sequence visualization is thus a diegetic futuristic simulation made using an actual futuristic simulation. In many of the following examples in this chapter there will be diegetic digital screens displaying data and simulations. In those cases, though, the VFX spectacle lies somewhere else in the film as a photorealistic image. Here, the diegetic simulation and the nonlinear animation coincide. While critical discussion of digital effects over the last few decades has been dominated by the discourse of synthetic photorealism, there is a different sort of realism at work here: the realism of computational mediation and nonlinear simulation.

Beyond the mere presence of a simulation, the film taps into several concepts from the science of nonlinear systems and simulation, which were becoming familiar to the public at the time. One important concept is fractals.16 A fractal is an algorithm that generates infinitely nested self-similar patterns. In the 1970s mathematician Benoit Mandelbrot argued that these algorithms had the potential to describe shapes and processes in nature.17 For example, in a paper for the journal *Science* he argues that the British coastline is a fractal pattern because the closer you study it, the more complex and detailed its shape becomes.18 Fractals had a level of popularity in the 1980s and 90s that far outpaced their utility. They exceeded computer science and mathematics circles to become a part of popular culture. The visually compelling way fractals rendered nonlinear complexity made them extremely popular: fodder for dorm room posters and blockbuster movies. This trend started with Heinz-Otto Peitgen and Peter Richter's book *The Beauty of Fractals* (1986). Although the book was fairly technical, it featured large full-color prints that anyone could appreciate.19 Three years later, the New Museum in New York presented an exhibit titled "Strange Attractors" (1989), which featured fractal renderings of chaotic phenomena.20 Fractals soon became a household name.
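Mandelbrot's coastline argument can be made concrete with the Koch curve, a textbook fractal (this is a generic illustration, not drawn from Mandelbrot's paper, and the function name is invented for this sketch). Each refinement replaces every segment with four segments one third as long, so the measured length grows by a factor of 4/3 at every level: the closer you look, the longer the coastline gets.

```python
def koch_length(base_length=1.0, depth=0):
    """Measured length of a Koch-curve 'coastline' after `depth` refinements.

    Replacing each segment with four segments one third as long multiplies
    the total length by 4/3 per refinement, so length grows without bound
    even though the curve stays inside a fixed region."""
    return base_length * (4.0 / 3.0) ** depth

# Length keeps increasing with the depth of inspection -- the hallmark of
# a fractal boundary, and the crux of the coastline paradox.
lengths = [koch_length(1.0, d) for d in range(7)]
```

The divergence of measured length with ruler size is exactly the property Mandelbrot used to argue that the coastline has no single well-defined length.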

The animation in the genesis effect sequence is a modified fractal. Carpenter's contribution was to add stochastic elements and map the fractals onto 3D surfaces in a way that seemed to mimic the unpredictable processes that formed the earth's surface. As the narration in a Lucasfilm making-of video titled *Computer Graphics in Star Trek II: The Wrath of Khan* describes it, "the fractal technique is a form of controlled randomness which adds a nature-like dynamic complexity to simulated scenes."21 The way one can grow the crystalline structure of a fractal, and the way that structure can convincingly mimic patterns in nature, tempts the assumption that computation is not simply a theory that models reality, but rather that it *is* reality in some respect.

Although the making-of video is careful to describe Carpenter's animation as "nature-like," the film collapses the difference between simulation and natural process. Another example of this discourse from the same period is John Conway's Game of Life, noted in Chap. 2. Based on John von Neumann's concept of cellular automata, Game of Life uses a grid of squares with specific rules for being on or off to produce emergent, evolving patterns, where squares seem to coalesce and form functioning entities. Enthusiasts developed a taxonomy of different life forms that emerged from Game of Life simulations, with names like "glider guns."22 Much like fractals, Game of Life was a popular program to play with on expensive institutional computer systems. Conway's program shows how fundamental nonlinearity is to this discourse because it is emergent in nature. Just like any form of nonlinear animation, the user simply sets initial conditions and parameters, and the software animates the field of squares with vital complexity.
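Conway's rules are compact enough to state in full. The following is a minimal, generic implementation (the function and variable names are this sketch's own, not any historical version): a dead square with exactly three live neighbours switches on, a live square with two or three live neighbours stays on, and everything else switches off. All of the famous "life forms" are emergent consequences of just these rules.

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life one generation on an unbounded grid.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours every candidate cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The 'glider' reproduces itself one cell down and to the right every
# four generations -- behaviour stated nowhere in the rules themselves.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
```

Nothing in `life_step` mentions gliders or motion; the traveling pattern, like the taxonomy of "glider guns," is discovered rather than programmed.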

This discourse is very much a product of the Cold War R&D context that cultivated nonlinear simulation. As scholars like Philip Mirowski,23 N. Katherine Hayles,24 and Warren Sack argue,25 computer scientists have had a tendency to ignore the difference between a simulated process and a real-world process, assuming that the real world in fact conforms to the logic and characteristics of the computational machines we have built to imitate it. Mirowski calls this discourse "cyborg science" and Sack calls it "computational thinking." Hayles shows how nonlinear simulation in particular plays a part in this discourse using examples like Stephen Wolfram's controversial book *A New Kind of Science*. Wolfram believes that unpredictable processes in the world, like the patterns of leopards' spots, are driven by the exact same mechanisms as simulations. They are not a model of the process but the process itself.

The genesis sequence in *The Wrath of Khan* taps into these discourses of cyborg science and computational thinking at work in fractals, L-systems, and Conway's Game of Life. The diegetic concept of a tool that could create life on a dead planet overlaps perfectly with these ideas. The sum effect of the sequence is that we are witnessing real digital life on screen. The fact that the genesis device is represented through a diegetic simulation strengthens this connection. As Kristen Whissel theorizes, the narrative and the spectacle conspire to emblematically convey a concept. They make this discourse of emergent virtual life sensible in a unique way that narrative, aesthetics, or VFX spectacle on their own could not. They also collapse the difference between simulation and reality, embracing the concept that a simulation could give rise to natural life patterns, and that patterns like the shape of mountains and the spacing of branches on trees follow the rules of computation, rather than the other way around. From this perspective the naturalism, even the realism, of simulated digital images becomes unassailable.

The narrative of *The Wrath of Khan* follows the members of the starship Enterprise trying to stop Khan from using the genesis device as a weapon. The film thus points toward a potential fear of this creative power. This trope of human endeavor going too far and being humbled by nature recalls Goethe or Shelley, yet the film does not belabor this perspective. The effect of the genesis sequence is one of technological mastery, above all else. Other films toward the end of this era begin to emphasize the threat of nonlinearity much more.

The concept of emergent virtual life is still at work almost a decade later in another VFX landmark film, perhaps the single most iconic digital VFX film. *Jurassic Park* (1993) is unique among the examples in this chapter because, despite being a VFX landmark, it does not feature any significant form of nonlinear animation. Though, as Warren Buckland notes, the realistic-looking synthetic dinosaurs convey the impression that they are simulated life.26 The film is nevertheless preoccupied with nonlinear simulation concepts and still manages to make a connection between these themes and its VFX spectacles. Thus, it merits brief mention. *Jurassic Park* belabors the connection between the computer code of digital objects and the code of DNA, and through this a sense of mastery not unlike that of *The Wrath of Khan*. Here scientists can synthesize life and sculpt it, and the process is explained using diegetic computer graphics visualizations much like the genesis sequence. The film's other thematic interest, though, is the nonlinear mathematics of chaos theory. The emergent life of the dinosaurs, and the disaster that ensues, are thus mediated through the concept of nonlinear simulation. Although the film ends with unruly life continuing to run amok, it has proven to be a popular metaphor in systems management literature. At least three academic articles on management use the film to discuss the management of nonlinear systems.27

In the film, a doubting chaos mathematician named Dr. Malcolm (Jeff Goldblum) anticipates that the park will inevitably descend into entropy and disorder, and eventually it does. Edward Norton Lorenz first developed chaos theory when he was working on early dynamic simulations of weather patterns. Lorenz discovered that any minuscule change in the input to his simulations resulted in wildly different outputs. If he changed one tiny thing in a complex system, everything would be affected. This was a product of the complexity produced by a dynamic simulation. Lorenz also eventually discovered that certain mysterious patterns emerge in wildly complex and unpredictable systems. It was highly counter-intuitive to find regular patterns in what should have been unpredictably complex systems. He referred to the mysterious causes of these patterns as "strange attractors." Chaos theory thus concerns the mysterious forms of order in what should be order-less systems. The film focuses more on the unpredictable aspects of chaos than on the subject of strange patterns. Chaos theorist Dr. Malcolm seems to believe that no complex system can be controlled. Thus, what is most significant about the film's use of chaos theory is the term's popular meaning. The Pulitzer Prize-nominated 1987 book *Chaos: Making a New Science* brought these strange yet compelling mathematical ideas to the lay public.28 The popularity of chaos theory was also helped by its connection to fractals, which offer a spectacular way to visualize chaotic systems. Indeed, the New Museum's 1989 exhibit on fractals, "Strange Attractors," was as much about chaos theory as it was about fractals.
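Lorenz's finding that a minuscule change in input yields wildly different outputs can be illustrated with a brief numerical sketch. The fragment below is an illustrative aside rather than an example from the sources discussed here; it integrates two copies of the standard "Lorenz-63" system (the conventional parameters sigma = 10, rho = 28, beta = 8/3) whose starting points differ by one part in a hundred million:

```python
def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one forward-Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def separation(p, q):
    """Euclidean distance between two states."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)    # differs by one part in 10^8

early, late = None, 0.0
for step in range(15000):      # integrate to t = 30
    a, b = lorenz_step(a), lorenz_step(b)
    if step == 100:            # at t = 0.2 the runs are still nearly identical
        early = separation(a, b)
    if step > 10000:           # by t > 20 the runs have fully decorrelated
        late = max(late, separation(a, b))
```

The early separation remains microscopic while the late separation grows to the scale of the attractor itself: the same sensitivity to initial conditions that put a hard limit on Lorenz's weather forecasts.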

*Jurassic Park* points to a trend that becomes more dominant through the later 1990s and 2000s where nonlinear animation is increasingly used to dramatize the threat of the unforeseen and unpredictable. Therefore, it is a liminal case between *The Wrath of Khan* and later films, as it is preoccupied both with the topic of artificial nonlinear life and with the threat of nonlinearity.

#### Perfect Storms, Catastrophe, and Climate Models

While films in the 1980s and early 1990s were preoccupied with the hidden mainspring of life itself, over time popular interest started to center on things in the world that are more relatable as nonlinear. Some things in our world, like the changing of the seasons or ocean tides, are reliably predictable, but there are many other unpredictable things that we must constantly contend with. Countless people in financial industries try in vain to anticipate the unpredictable movement of stock markets. Farmers and mariners have had to contend with unpredictable weather patterns since prehistory. This era focuses on the ontology of these unpredictable phenomena that affect our lives: where they come from, what they are, and whether we can understand them.

The 1996 film *Twister* centers on meteorological researchers who are trying to develop a technique for predicting when and where a tornado will touch down. The question they are trying to answer is thus how events precipitate from within complex dynamic systems. Weather prediction has been one of the key applications of nonlinear simulation. It is one of the great successes of the technology. While the narrative of *Twister* does indeed tell a story of the triumph of simulation and its ability to predict the seemingly unpredictable, the film uses nonlinear animation in VFX sequences to conjure a sense of threatening unpredictability to bring the monstrous tornado to life.

The scientists in *Twister* are trying to study tornadoes by filling them with thousands of motion-trackers to understand their behavior and make them more predictable. To do this they must, of course, chase storms, and they are constantly imperiled by the unpredictability of these weather phenomena in dramatic and spectacular fashion. The unpredictability of the storms is key to their menace in the film. They can appear out of nowhere in a seemingly calm situation, and they are often obscured by darkness or trees. One of the tornado-chasing scientists (Helen Hunt) reflects on the horror of the unpredictability of tornadoes, describing how they "skip this house and that house and come after you." The tornadoes represent, to use Mary Ann Doane's words, "unalloyed contingency," an untamed form of risk that has not yet been given meaning through mediation or scientific study.29 The film's approach to the unpredictable emergence of tornadoes recalls a popular discourse from the field of nonlinear mathematics: catastrophe theory.

Like chaos theory and fractals, catastrophe theory is a nonlinear mathematical theory that grew to have cultural meaning. René Thom coined the term to describe the moment a system goes from a state of "smooth change" to a state of "abrupt response."30 In other words, he investigated what happens at the moment when a predictable situation "bifurcates" into unpredictable dynamic change. British mathematician Erik Christopher Zeeman developed Thom's initial ideas further in the 1970s and applied them to topics as diverse as brain modeling, the stock exchange, biology, and the stability of ships. Zeeman's work led to a greater popularization of the concept, likely because he also developed a compelling visual model for his ideas, the catastrophe machine. The catastrophe machine consists of a rubber band fixed at one point, then connected at its midpoint to the edge of a pivoting disk. As you move the unfixed end of the rubber band around, some positions will cause the disk to rotate smoothly, while others cause the disk to rapidly snap back and forth. Charting these positions allows you to visualize the threshold where smooth change tips over into abrupt change. This model offers a compelling illustration of the way a regular, predictable state of affairs turns into an unpredictable situation.
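The behavior Zeeman's machine makes visible can also be sketched numerically. The following fragment is my own illustrative gloss, not material from the sources above; it uses the textbook cusp-catastrophe potential V(x) = x⁴/4 − x²/2 + bx, whose stable equilibria stand in for the disk's resting positions. As the control parameter b is swept smoothly, the tracked equilibrium drifts smoothly until its branch vanishes at the fold point b = 2/(3√3) ≈ 0.385, where it jumps abruptly:

```python
import numpy as np

def stable_equilibria(b):
    """Real roots of V'(x) = x^3 - x + b that are minima (V''(x) > 0)."""
    roots = np.roots([1.0, 0.0, -1.0, b])
    real = roots[np.abs(roots.imag) < 1e-8].real
    return real[3.0 * real**2 - 1.0 > 0.0]

# Start on the single equilibrium branch at b = -1 and sweep b upward,
# always following the nearest stable equilibrium ("smooth change").
x = stable_equilibria(-1.0).max()
jump_size, jump_at = 0.0, None
for b in np.arange(-1.0, 1.0, 0.01):
    minima = stable_equilibria(b)
    nearest = minima[np.argmin(np.abs(minima - x))]
    if abs(nearest - x) > jump_size:     # record the largest single move
        jump_size, jump_at = abs(nearest - x), b
    x = nearest
```

Almost every 0.01 step in b moves the equilibrium only slightly; the exception is the single "abrupt response" near b ≈ 0.385, where the branch being followed ceases to exist and the state snaps to the other branch, a discrete analogue of Zeeman's disk suddenly spinning to a new position.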

In the film the opaque, seemingly meaningless way storms behave feeds a sense of fear. In this sense, the concept of nonlinearity is not used to tame or rationalize the unpredictable process, but to animate it. Industrial Light and Magic animated the complex, detailed swirling of particles in the tornado with Wavefront's recently released software Dynamation.31 The animation of the tornado is legibly complex and unpredictable, and that sense of nonlinearity illustrates and animates the film's themes. The animated tornado is an emblem for these ideas. It conveys a sense of simulated catastrophic unpredictability.

The threat of raw contingency is eventually overcome in the film when the scientists gather enough data to make a predictive model. The model will finally provide some answer to the menacing unpredictability of the tornado. The resolution of the narrative sees the scientists succeeding in inserting dozens of motion trackers into a tornado. As the motion trackers are sucked up into the tornado, the scientists look at a computer visualization of their motion paths. This visualization thus represents a different face of nonlinear simulation, one that can tame contingency and give it numerical meaning. With this information they can create models of the phenomenon and learn to predict it. Thus, simulations ultimately provide an opportunity to control the unknown.

Weather is in many ways the archetypal nonlinear system. As I noted, Lorenz's discoveries about chaos came from weather simulations. Weather modeling and prediction have been major drivers of research in nonlinear simulation in institutions like the Los Alamos National Laboratory. Weather is also a familiar and relatable kind of contingency. Humans have always been at the mercy of the unpredictable nature of changes in weather. These combined factors make it a prime subject for the epistemology of nonlinear simulations.

Like *Twister*, *The Perfect Storm* (2000) is a blockbuster movie about the weather and the nature of contingency, and it animates unpredictable weather events using nonlinear animation. This film takes a slightly different approach to understanding emergence, though, focusing on the complex conditions that precipitate unlikely events. It also makes reference to economic risk in relation to the weather. The film is based on the real-life fishing crew of the Andrea Gail, which was lost during a historical weather event that took place in the fall of 1991 in the North Atlantic. This weather event has been referred to as the "Halloween Nor'Easter," "The No-Name Storm," or "The Perfect Storm." In the film the crew has an unusually good haul of fish far out in the Atlantic, but amidst their economic success their icemaker breaks down and they are forced to choose between letting their catch spoil or attempting to sail through a storm to get to port. As they enter the storm, conditions worsen and the ship is lost with its crew. The story is thus not just about unforeseen events but also about economic risk in relation to those events.

As the crew are in the midst of a worsening storm, the film cuts to a meteorologist (Christopher McDonald) looking at a visualization of a computer model. He explains to a co-worker how pressure systems and air masses are interacting with the already-formed Hurricane Grace to create a super-storm. The visualization shows past radar maps, as well as simulations of the near future. As he points to a computer screen, he says, "you could be a meteorologist all your life and never see something like this; it would be a disaster of epic proportion…" Every one of these unlikely factors had to be in place at the exact right time for a storm of this magnitude to take shape. This is the definition of a perfect storm, a term that the film (and the book it was based on) popularized.32 This scene creates dramatic irony, as the crew are unaware of these developments, even though meteorological computer models were able to predict them. The crew knew they were taking a risk, but they did not know the odds.

This situation recalls Louis Bachelier's original theory of using stochastics to calculate economic risk from Chap. 2, a concept that Fischer Black and Myron Scholes developed into their eponymous model, which transformed finance in the 1970s by allowing formerly unknown risks to be calculated. These methods obviously cannot predict the future; rather, they allow investors to know the range of possible values something might have, even if unpredictable events occur. The little bit of randomness in these equations thus stands in for the unknown. And in the film this unknown factor is visually represented by a colossal rogue wave, animated using nonlinear animation. The effects in this film represented a significant advance in fluid simulation. The bulk of the film takes place at sea in the storm, and while the actors and the boat set were shot in a water tank, the ever-present ocean setting was entirely animated through ILM's OCEAN fluid simulation technology. It is perhaps the best example of the second generation of fluid simulation described in Chap. 3.

The term perfect storm has taken on even more specific meaning in the decade following the film's release, as it was used frequently in press stories regarding the global financial crisis of 2008.33 Financial markets are akin to weather in that they are highly unpredictable dynamic systems that certain groups would benefit immensely from understanding better. Like weather disasters, economic disasters are a form of contingency that is important to the average person. The average worker with retirement savings is not unlike the farmer of past centuries; both must contend with the knowledge that their prosperity relies on changes that are beyond their ability to predict. The generalization of this term as a part of common speech suggests that we have begun to think of all sorts of phenomena in terms of nonlinear systemic change.

The film's focus on the conditions that precipitate unlikely events recalls the National Center for Supercomputing Applications' 1989 *Visualization of a Numerically Modeled Severe Storm*. This computer visualization, which used an early version of Wavefront software, was so visually compelling it was nominated for an Academy Award.34 The visualization was based on a National Center for Supercomputing Applications simulation of a devastating weather system in Oklahoma in 1964.35 Scientists took atmospheric conditions from that event and tried to see if they could precipitate similar events in a dynamic simulation. Thus, the simulation they produced was not a model of the actual storm, but a model of a storm that was produced by the same conditions. This is a nuanced ontological distinction, and one that demonstrates nonlinear simulation's complex relationship to time and contingency. *The Perfect Storm* endeavors to represent and animate the past in this way. Clearly it does not carry the evidentiary weight of a government supercomputer simulation, but the film's spectacular nonlinear waves evoke a sense of past events emerging from historical conditions.

*The Perfect Storm* thus holds several references to risk mediated through nonlinear simulation in tension with each other. Much like *Twister*, the diegetic use of simulations seems to suggest science's ability to make sense of unforeseen catastrophe. The storm is explicable after the fact, and indeed even right before, as a confluence of factors. Had the crew of the Andrea Gail known what the meteorologists knew, they could have made a more informed bet, like a contemporary investor. Yet there is also an enduring threat here, that random figure in the equation, illustrated in the form of the simulated waves, a form of uncertainty that is no less menacing for its simulation. It is not as though mariners or investors live their lives in untroubled security thanks to simulation. Even the quantifiable can be terrifying.

Nonlinear images of storms and waves during this period have an aesthetic presence that could only be described as sublime. They are partially obscured by darkness, lit in deep chiaroscuro, allowing only brief glimpses of their totality when lit by lightning strikes. They dwarf all anthropogenic structures around them and swallow characters. The motion of the swirling eddies and vortexes at their edges and within their bodies conveys a sense of overwhelming complexity. These moments of visual effects spectacle humble attempts to quantify the storm or predict its behavior. Yet, importantly, it is not the power of nature that is meant to inspire terror, but the power of a nonlinear system. What is more terrifying to the average American than financial catastrophe, after all?

Writing about special and visual effects during this period, Sean Cubitt finds these images to be sublime, yet he interprets this effect as one that divorces them from both the meaning of the narrative and reality in general, in a typical postmodern fashion. He describes them as "extra temporal" and "extra historical," yet the threat and pleasure these films deliver are very much a product of their time and vital to their narrative.36 Scott Bukatman finds that sublime images in special effects are historical in the sense that they convey the "loss of cognitive power experienced by the subject in an increasingly technologized world."37 That sense of the loss of power is conveyed in these films through overwhelming complexity. And as Bukatman further argues, special effects can deliver the pleasure of mastery in response to this threat.38 This is clear in the conclusion of *Twister*, where the symbolic defeat of the monstrous storms comes in the form of a successful computer model. It is also clear in another example from this era, *The Day After Tomorrow*.

Released just four years after *The Perfect Storm*, *The Day After Tomorrow* takes a similar approach to weather catastrophes, but with the added dimension that it is oriented toward imagining possible futures rather than representing the past. The epistemology of nonlinear simulation lends itself to this sort of temporal flexibility. Just as simulations of past events speculate about how phenomena can emerge from specific conditions, simulations of the future can speculate about how current conditions might give rise to events that have not yet happened. This ability to predict the future is a key factor driving research into simulation tools.

Along with *An Inconvenient Truth*, *The Day After Tomorrow* is one of the two key films of the 2000s that sought to promote awareness of anthropogenic global climate change to the public.39 The title of the film is meant to evoke the widely watched 1983 telefilm *The Day After,* which portrayed a hypothetical global nuclear war and its aftermath. The implication is that climate change is the present generation's version of the global peril of nuclear war.40 Like *The Perfect Storm*, *The Day After Tomorrow* is notable for its use of new fluid simulation tools. Promotional material, especially trailers, heavily featured clips from a sequence where a colossal wave crashes into New York City, engulfing the Statue of Liberty and carrying freight ships downtown, crashing into buildings. This wave was animated using Digital Domain's FSIM dynamic simulation software. This effect belongs to the same second generation of fluid simulation techniques as *The Perfect Storm.*<sup>41</sup>

*The Day After Tomorrow* imagines a future where anthropogenic changes in global temperatures set off dramatic changes in weather. The weather phenomena portrayed in the film are, however, so dramatic, so rapid, and on such a great scale that they detract from verisimilitude. They are frankly ridiculous. Events featured in the film include the aforementioned wave in New York, which is as high as a skyscraper, a series of tornadoes that tear apart buildings in downtown Los Angeles, and the sudden onset of a new ice age that renders most of the United States uninhabitable. All these events occur in a matter of a few days. The film follows many of the disaster genre tropes of its time, established by films like *Independence Day* (1996) and *Armageddon* (1998), where a great deal of visual novelty (and perhaps perverse pleasure) is derived from seeing recognizable landmarks and locations catastrophically destroyed.42 *The Day After Tomorrow* is however noteworthy for the way it poses thematic tropes, conceptual frames, and visual spectacles relating to nonlinear dynamic systems and simulation.

Much like *Twister* and *The Perfect Storm*, *The Day After Tomorrow* features sequences where a scientist explains the causes that lead to these catastrophic events. Indeed, they all feature a strikingly similar scene where a scientist looks at a computer model and predicts the impending disaster, only when it is already too late, of course. *The Day After Tomorrow* additionally features a scene where the protagonist scientist Jack Hall (Dennis Quaid) uses visualizations of his climate simulations displayed on a large video wall to warn a group of world leaders about climate change. He explains that changes in global temperatures could upset the regular flow of air and ocean currents, causing the system to collapse into unpredictable chaos.

The explanation the film offers of the mechanisms driving the wildly destructive events it features is more detailed in its engagement with nonlinear concepts than the other examples in this chapter. Jack Hall's description of how global air and ocean currents could be destabilized by a change in temperature is, at its core, based on the same principles as real climate change science. The global flow of sea and air currents and their exchange of heat is a complex yet stable system. In the phraseology of René Thom, it is "smooth." The introduction of anthropogenic warming threatens to throw this stability into disarray. Just as Lorenz found that small changes in input can result in massively different results, such an abrupt change could destabilize the whole system: jet streams would move, dry areas would become wet, warm areas would become hot, and storms would become more intense and frequent. This is one of the key anticipated adverse effects of climate change.43

The feature effect in *The Day After Tomorrow* is a giant nonlinear animated wave. In the film the menacing, darkly lit wave engulfs the Statue of Liberty and lays waste to New York City amidst a thunderstorm. The wreckage of human endeavor in the film is reminiscent of the romantic sublime landscapes of J.M.W. Turner's *Snow Storm: Hannibal and his Army Crossing the Alps* or Philip James de Loutherbourg's *An Avalanche in the Alps,* where imposing, dark, highly detailed clouds descend on tiny human figures. Like the romantic sublime, this film inspires humility in the face of a natural world beyond our control. Yet ultimately it also offers reassurances in the face of this looming menace. Although Jack Hall's ability to model climate change ultimately does not save the United States from disaster, the message of the film is one of the power of models and of the importance of using them, part of the highly didactic aims of this climate change film.

What does a climate model tell us exactly? What kind of evidence does it provide us? This is a scientific question, but it is also a question the lay public needed to negotiate. Understanding climate change requires accepting evidence from data models and simulation. Although it may seem that simple data points such as historical ocean temperature recordings are the primary source of our understanding of climate change, these data points are meaningless until climate researchers put them into a model. As Paul Edwards argues, "everything we know about the world's climate – past, present, and future – we know through models."44 The film seems to be taking on a public education role, informing still-skeptical citizens of the evidentiary value of simulations.

*The Day After Tomorrow* uses simulation to represent possible future events, rather than events in the past. It is representing something more abstract than *The Perfect Storm's* Halloween Nor'Easter. Digital media and games scholar Mark Wolf has proposed a new way of thinking about truth claims through simulation, which he refers to as "subjunctive documentary." He argues that simulations are not limited to representing what *is* or *was*; rather, they can also represent "what could be, would be or might have been."45 Wolf's subjunctive documentary fits halfway between fiction and documentary: it is imaginative, but it is also oriented toward understanding reality. Citing Jonathan Crary's Foucauldian study of optical tools, Wolf suggests that in the same way we began to understand reality through the new visibilities created by microscopes and telescopes, our understanding of reality is changing once again through the episteme of simulation. The laws of physics used in computer simulations are simply new "conceptual indices" to reality.46 Clearly Wolf is not thinking about the ludicrous VFX in *The Day After Tomorrow* here, but his ideas gesture toward the epistemic possibilities of thinking through nonlinear simulation. If simulation can create a different kind of documentary, then it must also be able to create a different kind of fiction. *The Day After Tomorrow* takes present conditions and imagines the resulting storm, animating the storm with nonlinear animation. So, while the film offers an image of disaster, it also offers a sort of moralistic fable about a possible future if we ignore the models, as the people in the film do. As the public began to see threats through a nonlinear paradigm, simulation offered an antidote.

#### Unpredictable Embrace: Resilience and Creative Management

The popular discourses concerning nonlinear simulation in the late 1990s and early 2000s that saw unpredictable nonlinearity as a threat that needed to be managed gave way to a much more positive outlook in the following years. Increasingly, chaos has become something to be embraced and directed toward productive ends. This third and final era is best represented by animated features. As Chap. 5 noted, nonlinear simulation was a key component in a shift in the history of management that facilitated greater organizational resilience. Pixar promotes their own brand of this school of post-Taylorist management that focuses on embracing unpredictability through flexibility. Their brand of animated management is influenced by the company founders' experience developing nonlinear animation tools. As co-founder and former head of Pixar Ed Catmull writes in his book on the subject, "to my mind randomness is not just inevitable; it is part of the beauty of life … The unpredictable is the ground on which creativity occurs."47

Given that this approach to nonlinearity was fostered at an animation studio, it should come as no surprise that animated features have begun to exhibit this way of thinking. Unlike the VFX disaster spectacles of the late 1990s and early 2000s, or early examples like *The Wrath of Khan*, these examples do not bridge the gap between nonlinear themes and their animated equivalents through diegetic frames of predictive computer models. Instead, animated characters that are imbued with an unpredictable and difficult-to-divine will of their own embody themes relating to nonlinearity in these films. These characters are, of course, animated with nonlinear animation.

In Disney's *Moana* (2016), a mysterious ecological disaster threatens the eponymous Pacific island princess's community. Unlike the rest of her village, Moana shares a special kinship with the sea. In a flashback to her childhood, we see her as she toddles close to the waves and a figure takes shape on the surface of the turquoise water, resembling the alien pseudopod from James Cameron's *The Abyss* (1989). Though the waves ripple and splash with naturalistic unpredictability, the pseudopod bends and gesticulates with the intentionality of an animated character and exhibits a sort of nurturing care for the young Moana. This scene sets up her connection to the ocean and her journey into its unknown expanse to save the village. The people of Moana's village are fearful of traveling into open water, and the narrative arc follows her journey past their island's protective reef. In a key scene she tries to pass the shoreline waves in an outrigger canoe, breathlessly falling beneath the waves before being washed back to shore. The ocean is therefore an ambivalent combination of threat and friend. Over her journey there are several such set pieces involving the ocean. The film's production included substantial R&D for the development of a new fluid simulation pipeline, including a new solver that Disney Animation dubbed "Splash," which led to at least four SIGGRAPH papers. The fluid FX in this film were thus an important component of both its production and a promotional demonstration of Disney Animation's technical capabilities.

In the conclusion of the film, Moana averts the impending ecological disaster threatening her people and they return to exploring the seas as their ancestors did. Thus, while the villagers start out fearing the unpredictable power of the oceans, when Moana ventures beyond the safety of the reef she eventually discovers a harmony with the ocean that leads the village to prosper, with her as their leader. Ed Catmull writes in *Harvard Business Review* that embracing risk can be "downright scary," but he encourages other CEOs to follow his example and build structures that embrace unpredictability.48 While this philosophy of nonlinear animated management has had wide-reaching influence, perhaps no company has been more influenced by it than Disney Animation. After Disney purchased Pixar, Catmull became head of Disney Animation. The very fluid animation pipeline the film features is a product of this same philosophy. R&D is a risky venture. It takes time and money, and the outcome is rarely certain. Yet Pixar has made R&D a key component of their operations, and when Disney purchased the studio, they founded their own dedicated R&D wing, Disney Research. Although *Moana's* fluid pipeline and Splash solver were not the product of Disney Research but rather part of the production budget for the film, they are still a product of this same risk-embracing philosophy. The theme of taking risks, embracing unpredictability, and prospering as a result thus acts in unison with the technical showmanship of the animated ocean in a way that seems to declare Disney Animation's adherence to Pixar's corporate philosophy.

The 2019 sequel to Disney's blockbuster hit *Frozen* bears strikingly similar themes and an equally similar scene to *Moana*. Like Moana, Elsa is a monarch attempting to uncover her familial connection to elemental forces and save her community from impending disaster. In her quest for answers Elsa crosses a threatening sea, venturing into the unknown, and like Moana, fails at her first attempt, nearly drowning. The rendering of the ocean in *Frozen II* is decidedly more threatening than in *Moana*, though, with dramatic chiaroscuro lighting, a darkened sky, and more menacing, violent waves. Indeed, the ocean is highly reminiscent of the one in *The Perfect Storm*. Elsa looks especially fragile struggling to keep from drowning. Although the ocean in *Moana* is equally naturalistic, the turquoise color and sunshine make it less threatening, and the quality of the fluid simulation is somehow less severe, more rounded.

As Elsa struggles through the water, she is waylaid by a water horse spirit named Nøkk, who fights her and attempts to drown her by dragging her deep beneath the waves. She eventually lassoes Nøkk, though, making it her companion and using it to make the rest of her journey far easier.49 The ocean in *Frozen II* thus turns from initial menace into eventual ally. Elsa's new companion is an emblem of the benefits of embracing nonlinear chance and venturing into the "downright scary" unknown.

Clearly neither of these films has a scene with a visualization on a computer screen. Yet, like the films before them, they create a thematic connection between nonlinear animation and a discourse of nonlinearity. Instead of diegetic simulations they use voiceless, spirit-like characters connected to elemental powers to embody nonlinear themes. These characters both initially appear as a menacing force, making nature seem just as implacable and menacing as in *Twister*. But while the mystery of the nonlinear becomes mastered through models in films from *Twister's* era, here it is befriended. It is an almost literal interpretation of Ed Catmull's philosophy that encourages people to befriend the unpredictable unknown rather than to resist it and avoid it. While this is a way of thinking about nonlinear change that has spread to innumerable businesses, entrepreneurs, and workers through Catmull's book and his publications in *Harvard Business Review*, it also has specific meaning in this context. These characters are a declaration of principles for the Catmull-run Disney Animation Studio.

Both Moana and Elsa represent a new regime of management that embraces unpredictable change. It is interesting to note they also represent gendered change within their respective texts. Both replace monarch fathers, and both represent a significant shift away from the passive Disney princesses of the past that were defined through their relationship to men. Their embrace of the unknown and of the resilient management style it entails is, at least in these films, being coded not just as progressive but also as feminine.

Between 1982 and 2019, themes relating to unforeseen and often disastrous events began to be increasingly mediated through nonlinear concepts in feature films. At the same time, nonlinear animation became a more common feature in both VFX and animation. While nonlinear animation does not always represent nonlinear themes like changes in weather, climate, ecology, or economics, in certain texts at key points in time it acts as an emblem that is connected to these subjects. Over this period the thematic meaning of nonlinearity has changed markedly. At first, in films like *The Wrath of Khan,* it represented a sort of totalizing vision for the entire universe and the mastery of computation. Soon after, it came to represent the menace of unforeseen events and incalculable risk. In this context it took on a sort of sublime aesthetic quality, inspiring fear and awe. This is not to say that the reassurance of computational mastery was absent from these films, though. Often some sort of diegetic simulation demonstrated that these seemingly random processes and complex systems could in fact be quantified. In the final and most recent phase, nonlinear animation has preserved some of this menace and continues to represent forces beyond understanding, yet characters' relationships to these forces have changed substantially. Here nonlinear uncertainty is a threat not to abhor or to control but to be at peace with, to embrace. In the case of *Frozen II* it becomes quite literally an animal to be tamed. This is a dramatization of the logic of resilient management and agile development principles profiled in Chaps. 4 and 5.

The themes in these films represent a broad historical episteme that took shape during this period, a way of thinking that has spread to virtually every corner of society. An example from civil hydrodynamics demonstrates this shift in a way that resonates with these films. In the past, the conventional way to deal with an eroding shoreline was to build bulwarks or seawalls that would stop the incoming waves, a subject that has taken on greater importance with anthropogenic climate change. Contemporary research opposes this approach, though. Putting up "hard" resistance to the sea does not dissipate the force of waves but instead redirects it, worsening the effects of erosion. Now, specialists in the hydrodynamics of coastal erosion recommend a "soft" approach. Rather than building a wall, for example, they recommend planting mangrove trees to dissipate waves.50 This change in approach is no doubt based on empirical data, but it also represents a shift in paradigms and in research tools. Present policy is based on scientific nonlinear simulations, and it exhibits the same changing logic that can be observed in these different eras of films. While live-action disaster films like *The Perfect Storm* or *The Day After Tomorrow* pit humans against the sea, struggling to escape its menace, animated features like *Moana* and *Frozen II* see characters making peace with the sea, and thriving as a result.

#### Notes



### Conclusion: Engineering Movies

Cinema has been constructed as being animated by different energies over its history, including spectral haunting, mechanization, and electricity.1 Nonlinear mathematics represents another addition to this list, one aligned with the archeological layer of the past few decades. While early special effects films like Segundo de Chomón's *The Electric Hotel* (1908) saw electricity bringing household items to life so they could speed around the room with a mind of their own, now we see animating liveliness in things like random numbers and dynamic calculations. This form of vitality requires engineering computational apparatuses. It is a different kind of work, one that entails a different balance between animation and automation than that of manual animation or traditional live-action film recording.

The nonlinear simulation and animation technologies discussed in the preceding chapters took shape in a specific historical, institutional, economic, and political context. The Second World War and Cold War R&D complexes took decades-old concepts from mathematicians like Henri Poincaré and developed them into simulation technologies. The period that followed, which saw a shift from publicly sponsored R&D to tax-incentivized private R&D, further developed these technologies to specific media industry ends. As Raymond Williams's famous critique of technological determinism reminds us, R&D is a place where we can see society influencing the shape of new media technologies, even before they are put to use.2

Nonlinear animation was but one mode of production in film industries during this period, and a limited, often very compartmentalized, one at that. These new animation tools and practices developed in complex relation to other changes, like the rise of R&D within film production, the emergence of self-styled Silicon-Valley-meets-Hollywood studios, a marked shift toward globalized post-production labor, and the rise of competitive bidding in the VFX industry. Nonlinear animation was shaped by these changes but also fed into them. Buoyed by the strategic and economic value of R&D and the logic of blockbusters, it influenced not just the way movies were made but also the way studios sought to manage production labor.

Nonlinear animation offers greater control over contingency and materiality than camera and film. While the appearance of filmed smoke or water can certainly be altered, you cannot really tell it how to move. Nonlinear animation affords that kind of control. At the same time, it is more representationally restrictive than traditional animation, more reliant on a rationalizing, discretizing way of seeing the world. Nonlinear animation seems to suggest that film studios are thinking even more like managers, investors, and military scientists than ever before. Many uses of nonlinear animation are prime examples of hyperrealism, a term that has been applied to specific uses of other animation techniques like the multiplane camera, Cartesian 3D, and ray-tracing rendering.

Hyperrealism means slightly different things to different scholars, with varying degrees of distance from Baudrillard's original postmodern meaning of the term. It can be a way for film theorists to identify new digital threats to cinematic realism. For example, Dudley Andrew roundly criticizes the film *Amélie* (2001) for its seamless, digitally retouched, hyperrealistic appearance.3 Conversely, hyperrealism can also be used by animation and digital media scholars to identify forms of moving images that unreflexively use given forms of representation, rather than making use of the representational flexibility of animation. Here the terms "second-order realism" and photorealism are very close to hyperrealism.4

Contrary to the way a traditionalist film theorist might celebrate cinema's fidelity to reality, animation scholars tend to celebrate a total lack of representational fixedness. Animation's capacity for formless fluidity could perhaps be called the central principle of animation studies. The oft-cited ur-theory of this discourse is Sergei Eisenstein's description of the "plasmatic" nature of early Disney shorts. Though Eisenstein sees animated plasmaticness as an escapist symptom of capitalism, he cannot help but appreciate the anarchic potential of the formless transformation of Disney characters.5 Much contemporary theory follows this logic. Paul Wells uses *Felix the Cat* cartoons as an example of how animation's capacity to present a topsy-turvy, anarchic world has subversive potential.6 Similarly, Norman Klein celebrates animation's ability to create "ani-morphs," images suspended in the in-between state of a process of transformation.7 The animation studies canon is full of examples of this kind of formlessness, from Émile Cohl's *Phantasmagoria* (1908) to Ryan Larkin's *Walking* (1969). This way of thinking about animation situates it as a place to question and destabilize the rigid representational form of photographic cinema. A good example is animated documentary, which tends to question the authority of objectivity in favor of different, often subjective or experiential, epistemologies. In examples like *Waltz with Bashir* (2008), animators convey the affective quality of the experience of memory and dreams, rather than objective truth.

These commitments dictate the use of the term hyperrealism by animation scholars as a kind of antithesis. For example, Wells uses the term to describe the look of Disney's animated features. The films that follow in the tradition of *Bambi* (1942) and *Snow White* (1937), which use techniques like the multiplane camera, generally try to mimic the perspective and appearance of photographic film. The characters do not squash or stretch or defy physics; they exist in a stable world of rules that mimics our settled ways of seeing.8 For Wells this is a betrayal of the immense potential of animation.

As animation theorists struggled to make sense of the numerical and logical nature of digital animation, hyperrealism proved a useful point of distinction. While Pat Power acknowledges that digital animation's origins in the military-industrial complex and Cartesian single-point perspective cause it mostly to take the shape of hyperrealism, he defends digital animation because individual artists can use these tools for expressive ends.9 Digital animation can escape hyperrealism when it appeals to the realm of "emotion, memory and imagination," when it portrays subjective realism rather than objective realism.10 To Power this is an appropriation, or détournement, of the objective, rationalizing DNA of digital tools.

Some of these critiques sound like they are merely enforcing prescriptive definitions of different media modes: "animation should be this and live-action cinema should be that, and never the twain shall meet." There is certainly an element of this thinking in Andrew's work. Yet animation scholars have tended to embrace the uncertain borders of the form, celebrating experimental work by artists like Norman McLaren or Stan Brakhage that does not fit readily into either category. Animation is, of course, a big category that includes a diversity of moving images, including the "super genre" of live-action. And as Karen Redrobe argues in her edited collection on the topic, thinking through binaries like "*continuous* versus *non-continuous*, *narrative* versus *experimental*, *indexical* versus *handmade*, and *animated* versus *live action*" leads to innumerable blind spots.11

Instead, the use of the term hyperrealism by animation scholars identifies a form of representation that is restrictive, unreflexive, and borrowed from somewhere else. These forms of representation can carry ideological baggage with them through the entanglement of power and knowledge. Cartesian perspective, for example, was famously critiqued by scholars like Jean-Louis Baudry.12 Forms like nonlinear animation and simulated physics borrow from science and the military, sources where the relationship between power and knowledge is especially tight.

This is a critique Lev Manovich makes of certain simulation-based forms of digital animation. In *The Language of New Media*, Manovich puts physical simulation into a category of hyperrealism, alongside other second-order realist computer graphics techniques like sophisticated lighting and lens effects.13 He notes the involvement of SIGGRAPH as a reason realism had become such an important concern in computer graphics (when he was writing in 2001). Manovich bases his critique on a precedent set by David Bordwell and Janet Staiger's work on the form of classic Hollywood cinema. They argue that the Society of Motion Picture and Television Engineers (SMPTE) "rationally adopted" realism "as an engineering aim."14 Manovich finds that the US military and Hollywood have done much the same at SIGGRAPH. The former wanted realism for immersive simulators, and the latter wanted it for VFX and animation. He writes,

What determined which particular problems received priority in research? To a large extent, this was determined by the needs of the early sponsors of this research – the Pentagon and Hollywood …. The requirements of military and entertainment applications led researchers to concentrate on the simulation of the particular phenomena of visual reality, such as landscapes and moving figures.15

Both the military's and Hollywood's R&D complexes were seeking a cold, fixed, instrumentalized realism, the vision of industry and military rather than artistry. To be clear, this is not a critique of computer graphics entirely. Manovich has written at length about experimental digital media. Instead, he is making a connection between the industries and institutions that develop new tools like physics simulations and the restrictive nature of the forms of representation they offer.

Harun Farocki mobilizes a similar critique of simulation, media industries, and the military in his video series *Parallel I–IV*. In these works he uses several examples of nonlinear animation, including animations of clouds, waves, and trees. Anselm Franke writes that these images in the *Parallel* series are shown to be a form of "representation (that) seeks to overcome lived reality by constructing, monitoring, and governing it."16 Positioning progressively more technologically sophisticated and photorealistic images in sequence, his films suggest that the contemporary animation of clouds or trees in cinema and games is not the product of human invention or interpretation but merely the product of functional technical ends. In this way, he sees these images as an extension of the operational or operative images he has theorized elsewhere: images not meant to be interpreted or experienced by humans but instead meant to have a functional utility for machines or computation, like the computer vision of a guided missile.17 This operational vision subordinates reality to efforts to manage and control it.

In Thomas Elsaesser's interpretation, the *Parallel* films see simulated computer graphics as representing the "new invisibility" of contemporary life, referring not to materiality, the way traditional film does, but instead to the digital transactions, protocols, and ledgers that make up the reality of our contemporary lives. In this sense, the *Parallel* films construct simulated computer graphics as a form of "post-cinema," as theorized by scholars like Steven Shaviro.18 Yet, while the scholarly discourse of post-cinema is generally oriented toward understanding the way digital media are relevant to our lived experience, Elsaesser's interpretation is that the *Parallel* films instead focus on the way simulated computer graphics alienate us from reality, the way they obfuscate and elide the "harsh materiality and deadly consequences of a world that now lives by the simulated image."19 For Farocki, nonlinear animation is hyperrealism in the worst sense of the word.

This critique of nonlinear animation as hyperrealism focuses on the way it promises reality but instead delivers a way of seeing that is rationalizing, instrumentalizing, alienated from material reality, and deeply rooted in military and industry. This critique is not wrong. This book has offered a plethora of examples of how R&D complexes simultaneously supported nonlinear animation alongside military projects and management techniques focused on extracting more capital from workers. It has also shown how R&D complexes have extended further into film industries and film production over time, displacing and replacing workers through automation and flexible workflows. Nonlinear animation is part of an evolving regime of control that often trades in a rhetoric of liberal freedoms, innovation, and creativity, but which also extends the ability of businesses and institutions to control systems, processes, markets, and workers.

Yet, like other forms of digital animation and the multiplane camera before it, nonlinear animation does not create only a single type of moving image. Indeed, the way nonlinear animation involves building different mechanisms to drive motion makes it particularly open to alternate forms. As Chap. 4 showed, nonlinear animation incorporates engineering as a fundamental component of animation production. While this might make nonlinear animation seem more rationalizing and technical, this is not necessarily the case. As Chap. 2 argued, making a nonlinear animation requires adopting different assumptions about the mechanisms producing motion. Different assumptions imply different schemata for seeing the world, and different apparatuses for producing motion. It is as if technicians and artists are constantly reinventing the camera. The use of off-the-shelf software minimizes this effect, and these animations will still be processed through other 3D animation schemata of representation like rendering, but the fact that engineering is a part of production means that the representational apparatus will always have a degree of flexibility. To fully appreciate the potential of this, we need to shift the way we think about media and knowledge away from "knowing that" and instead think in terms of a "knowing how" epistemology.

Theoretical discussions of the epistemology of cinema have only ever worked within the category of "knowing that." Cinema is sensory; it presents images and sounds of the world. We have neglected how media technicians represent the world through building apparatuses and experimenting with them – in other words, through "knowing how." Focusing on how nonlinear animation represents the world through "knowing how" requires us to take seriously the contributions of engineers and technical workers as a part of film production. Indeed, taking this approach not only elucidates the meaning of digital media from the past few decades but also allows us to recognize practices in film production that have been going on for over a century. As a practice that entails special technical work, practical special effects offer particularly good examples of "knowing how."

Practical effects are profilmic effects, things like stunts and explosions. Early cinema is full of these types of effects, though they receive a fraction of the attention that early visual effects like the "stop trick" receive. For example, in Edwin S. Porter's much-discussed film for Edison, *The Great Train Robbery* (1903), there is an explosion that sends fluttering currency notes and spectacular rings of undulating smoke into the air. Superficially, there is nothing cinematic about practical effects such as these. They were clearly used in magic shows and other forms of theatrical entertainment before cinema. But these bits of chaotic material movement are very cinematic. They have the same appeal as the natural motion found in contemporaneous actualities. Georges Sadoul describes early audiences as being most impressed by puffs of smoke or dust clouds in early films.20 There is no reason such motion in artificial circumstances should be any less compelling, and these are all examples of what physicists and mathematicians would now label nonlinear phenomena.

Practical effects put natural nonlinear motion from water, smoke, fire, wind, snow, or rain in artificial, controlled conditions. The practice that offers the clearest parallel to contemporary nonlinear animation is likely the water tank. Starting in the 1910s, Hollywood studios began to feature large water tanks as part of their set repertoires. Famous Players-Lasky first built a tank for a sinking-of-the-Lusitania scene in Cecil B. DeMille's wartime feature *The Little American* (1917). Then, in 1922, United Artists built a tank for Maurice Tourneur's *Isle of Lost Ships*. Decades later, studios were still investing in bigger and better tanks, such as Toho's "big pool" (1960) and Fox's "Sersen Lake" (1962). Several large tanks from this era remain in use, including ones at Pinewood Studios, Cinecittà studios, and a large "horizon tank" on the seashore in Malta.

These tanks featured different hydraulic and mechanical devices for creating waves and large fans for creating wind, and they were designed to accommodate large sets that might simulate the sinking or listing of a ship. Sometimes these sets would be life-sized, but of course there is also a long tradition of scale models and maquettes being used in water tanks. The tanks are essentially fictionalized versions of the contemporaneous wind tunnels and hydrodynamic water tanks that were becoming popular for engineering and R&D. They recreate material conditions in miniature. They are simulations.

Such practical effects have themselves been criticized as a kind of hyperrealism. Siegfried Kracauer discusses artificial snowstorms in this capacity. Comparing German and Swedish film production cultures, he writes that if the Swedes wanted to record a scene with a snowstorm they would go outside, while the Germans, with their highly technical, industrial studio system, would opt to create a fake snowstorm on a giant indoor set.21 The implication here is that the German system is more artificial, less open to the contingency of reality. This is a trope that resonates strongly with realist strains in film theory. Yet we might consider the fake snowstorm as a simulation, not in the sense of a mere artificial copy, but in the sense of modeling certain aspects of a process, like an aerodynamic model of an airplane in a wind tunnel. This is a way of understanding and representing reality through building apparatuses.

Nonlinear animation is thus part of a long tradition of technical workers making apparatuses to stage moments of material complexity and emergence, and this work represents a way of approaching materiality through "knowing how" rather than film and animation's traditional "knowing that." To say that these images are so artificial they make no reference to material reality, or that they "overcome lived reality by constructing, monitoring, and governing it," underestimates the complexity of meaning they are capable of carrying. Few forms of representation in culture are that easily dismissed. True, nonlinear animation is a product of military-turned-corporate R&D, and it clearly plays an important economic role in contemporary VFX and animation industries. But it can tell us a great deal about the way society and culture are seeking to understand and represent our economic, organizational, and material world. Though examples like the nonlinear animation in *The Day After Tomorrow* or *The Perfect Storm* are risible, they have a lot to tell us about the way culture is negotiating new epistemological frames that are fundamental to understanding important issues like the future of climate change. We have been unpacking the complex meaning of photochemical film and cinematic conventions in the context of industrial modernity for decades. The tools and images produced by the ranks of new technical media laborers in VFX, animation, and game studios warrant the same kind of scrutiny.

#### Notes



### Bibliography




———. "The Secret Money Makers of the VFX and Animation Industries." Medium, September 12, 2020. https://medium.com/@jordangowanlock/the-secret-money-makers-of-the-vfx-and-animation-industries-89d64de385c7.


———. "The Cinema of Attraction: Early Film, Its Spectator, and the Avant-Garde." In *Early Cinema: Space, Frame, Narrative*, edited by Thomas Elsaesser, 56–75. London: BFI, 1990.

———. "Re-Newing Old Technologies: Astonishment, Second Nature, and the Uncanny in Technology from the Previous Turn-of-the-Century." In *Rethinking Media Change: The Aesthetics of Transition*, 39–59. MIT Press, 2003.


———. *Chaos Bound: Orderly Disorder in Contemporary Literature and Science.* Cornell University Press, 1990.


———. "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension." *Science* 156, no. 3775 (1967): 636–638.


Anthonie Meijers, John Woods, Paul Thagard, and Dov M. Gabbay. Elsevier, 2009.


———. "Video Games and Animation." In *The Animation Studies Reader*, edited by Nichola Dobson, Annabelle Honess Roe, Amy Ratelle, and Caroline Ruddell. Bloomsbury Academic, 2018.


———. "The Science of Fluid Sims." *Fxguide* (blog), September 15, 2011. https://www.fxguide.com/featured/the-science-of-fluid-sims/.


Witzel, Morgen. *A History of Management Thought*. London: Routledge, 2017.


———. "Timespaces in Spectacular Cinema: Crossing the Great Divide of Spectacle versus Narrative." *Screen* 43, no. 4 (December 1, 2002): 370–86.


### Index

#### **A**

Aarseth, Espen, 40 Abel, Robert, 61 Above-the-line work, 109, 125 Academy Awards, 107, 111 Acland, Charles, 71, 75 Advanced Research Projects Agency (ARPA), 35, 58 *Adventures of André & Wally B.* (*The*) (1986), 149 Agile development, 91–94, 96, 166 Alias | Wavefront, 60, 61, 65, 66, 68, 156, 158 Allen, Paul, 70 Amazon Web Service (AWS), 94 Andrew, Dudley, 172 *Antz* (1998), 66 Apple, 8, 56, 65, 76, 89 Arete, 28, 65, 67, 68 *Armageddon* (1998), 160 Association for Computing Machinery (ACM), 2, 5, 8, 36, 57–63 Autodesk, 4, 73, 75, 98, 103 *An Avalanche in the Alps*, 161 *Avatar* (2009), 25, 71, 77

#### **B**

Babbage, Charles, 37 Bachelier, Louis, 22–24, 158 *Bambi* (1942), 173 Barbrook, Richard, 9, 138 *Battleship* (2012), 67 Baudrillard, Jean, 17, 172 Baudry, Jean-Louis, 174 *Beauty of Fractals* (*The*), 151 Beer, Stafford, 128 Belton, John, 10 *Ben-Hur* (1959), 70 Bertalanffy, Ludwig Von, 128 *Bingo* (1998), 61 Bjerknes, Vilhelm, 27 Black-Scholes model, 23, 158 Blockbusters, 58, 69–71 Boeing, 25, 149, 150 Bordwell, David, 88, 174 *Brave* (2012), 119 Bray, John Randolph, 123, 127 Bridson, Robert, 67, 75 Brinkley, Douglas, 54 Brownian motion, 22, 24 Bugaj, Stephan Vladimir, 73


*A Bug's Life* (1998), 135 Bukatman, Scott, 148, 159 Bunge, Mario, 18, 35, 36, 39 Bunker Gilbreth, Frank, 122 Bush, Vannevar, 35, 37 Business Process Management (BPM), 129

#### **C**

Caldwell, John, 100, 109 California Institute of Technology, 60 Californian ideology (The), 138 Cameron, Andy, 9, 138 Carnegie Mellon University, 1 Carpenter, Loren, 25, 86, 125, 149–152 Cartesian perspective, 172–174 Catastrophe theory, 155, 156 Catmull, Ed, 56, 59, 89, 125, 130, 131, 138, 149, 163–165 *Cats* (2020), 96 Cellular automata, 28–30, 43, 126, 136, 152 Ceruzzi, Paul, 37 *Chaos: Making a New Science*, 154 Chaos theory, 27, 149, 154, 156 Charney, Jules, 27 Chun, Wendy, 11 Chung, Hye Jean, 100 Cinecittà studios, 177 Cinesite, 68 Classic Hollywood Cinema, 174 Climate, 147, 155–162, 166, 178 Cloth, 7, 126 Cohl, Emile, 123 Cold War, 31, 33, 52, 54, 55, 88 Computational fluid dynamics (CFD), 28, 63 Computer science, 36–38, 58, 59

Context Free, 25 Conway, John Horton, 30, 32, 43, 152, 153 Cook, Malcolm, 131 Crafton, Donald, 123 *Creativity Inc.*, 130–132, 138 Crogan, Patrick, 31, 33 Csuri, Charles, 59 Cubitt, Sean, 159 Curtin, Michael, 101 Cybernetics, 18, 31

#### **D**

Darley, Andrew, 146 Data collectors, 95 Data wranglers, 95 *Day After Tomorrow* (2004), 66, 147, 160–162, 167, 178 De Chomón, Segundo, 171 DeLanda, Manuel, 42, 43 De Loutherbourg, Philip James, 161 DeMille, Cecil B., 177 Deming, W. Edwards, 134 Digital asset management (DAM), 99 Digital Domain, 4, 28, 60, 66, 68, 72, 76, 78, 160 Digital Nature Tools, 28, 65 Digital Productions, 60, 65 Directability, 28, 66 Disney Animation, 164, 165 Disney Research, 52, 164 Doane, Mary Ann, 3, 11, 148, 155 Documentary, 162, 173 Dreamworks, 60, 68, 72, 75, 78, 107 *Dungeons & Dragons*, 25, 41 Dykstra, John, 150 Dynamation, 65, 156 Dynamic Graphics Project Group, 68 Dynamic simulation, 26–28, 30, 154, 158

#### **E**

Edison, Thomas, 33, 34, 177 Edmonds, Ernest, 2 Edwards, Paul, 32, 33, 162 Eisenhower, Pres. Dwight, 53, 54, 57 Elberse, Anita, 69 *Electric Hotel* (*The*) (1908), 171 Electronic Numerical Integrator And Computer (ENIAC), 23, 27, 38 Elsaesser, Thomas, 9, 175 Em, David, 59 Emeryville, 133, 135 Endorphin (software), 31 Engelbart, Douglas, 58, 60 Euler, Leonhard, 63 Exotic Matter, 28, 67, 75

#### **F**

*Fantasia* (1940), 3 Farocki, Harun, 175 Fedkiw, Ron, 67, 68 *Felix the Cat*, 173 Fermi, Enrico, 23 Financial markets, 27, 157, 158 *Finding Dory* (2016), 126 *Finding Nemo* (2003), 121, 126 FizT (software), 119, 126, 127 *Flag and Waves* (1986), 126 Flaig, Paul, 121, 132, 137 Fleischer Studio, 123, 124, 135 FLU (software), 66 Fluid simulation, 63–68, 75, 105 *Forrest Gump* (1994), 17 Foster, Nick, 66 Fournier, Alain, 126 Fox studios, 177 Fractals, 126, 149–154, 156 Frankel, Stan, 22 Frasca, Gonzalo, 40, 41 *Frozen* (2013), 119 *Frozen II* (2019), 147, 165–167

FSIM (software), 28, 66, 68, 160 Fulbright, William, 54, 55, 57 Full Scale Tunnel (FST), 34 Fusion CI Studios, 68, 74, 78, 104 FX artists, 86, 101–103, 105, 106, 109 FX TDs, 101, 103

#### **G**

Game of Life, 30, 32, 43, 152, 153 Game studies, 7, 40, 41 Gantt, Henri, 88, 122 Gaudreault, André, 70 Generative art, 2, 25 *Great Train Robbery* (*The*) (1903), 177 Grieveson, Lee, 56 *Grizzly Man* (2005), 3 *Guardians of The Galaxy* (2014), 94 Gunning, Tom, 70, 146

#### **H**

Hair simulation, 7, 75, 126, 127 Hansen, Mark, 43 Harlow, Francis, 63, 66 *Harvard Business Review*, 89, 131, 135, 138, 164, 165 Hayles, N. Katherine, 145, 152 Herhuth, Eric, 136, 137 High-Dynamic-Range Imaging (HDRI), 95 *Hobbit* (*The*) (2003), 67 Holliday, Christopher, 107 Holling, Crawford Stanley, 129, 131 Hollywood studio system, 88, 100 Houdini (software), 98, 102, 103 *How Walt Disney Cartoons Are Made* (1939), 124 Huizinga's magic circle, 41 Hyperrealism, 17, 172–175, 178

#### **I**

IBM, 38, 59, 60, 62, 134 Imperial College, 28 Implicit Continuous Field Eulerian (ICE), 64 *An Inconvenient Truth* (2006), 145, 160 *Incredibles* (*The*) (2004), 131 *Independence Day* (1996), 160 *Industrial Dynamics*, 129 Industrial Light and Magic, 1, 60, 66–68, 75, 156, 158 Institute of Electrical and Electronics Engineers (IEEE), 2, 58 Intellectual property, 51, 72, 76, 78 *Interstellar* (2015), 31 Iron triangle, 54 *Isle of Lost Ships* (1923), 177

#### **J**

*Jaws* (1975), 69 *Jazz Singer* (*The*) (1927), 70, 146 Jet Propulsion Laboratory (JPL), 60, 68 Jobs, Steve, 3, 72, 89, 131, 138 Joint Computer Conference, 58 *Jurassic Park* (1993), 17, 70, 153, 154

#### **K**

Kass, Michael, 65, 126 Kern, Jon, 91 Klein, Norman, 173 Koch, Gertrud, 41 Kracauer, Siegfried, 178 Krakatoa, 105, 106 Kriegspiel, 24 Kuhn, Annette, 146

#### **L**

Lagrangian Incompressible (LINC), 64 Landreth, Chris, 61 *Language of New Media* (*The*), 174 Larkin, Ryan, 173 Lasseter, John, 92, 135, 137, 138 Latour, Bruno, 97 Leslie, Stuart, 35, 54, 57 Levy, Lawrence, 72, 74, 78, 121, 134 *Life of Pi* (2012), 52, 93 Light detection and ranging (LIDAR), 95 *Little American* (*The*) (1917), 177 Look dev, 85–112 *Lord of the Rings* (*The*) (*2001*), 30 Lorenz, Edward, 27, 154, 157, 161 Los Alamos, 22, 23, 27, 29, 63, 66, 157 Lovelace, Ada, 37 L-systems, 149, 153 Lucas, George, 56, 77 Lucasfilm, 60, 107, 125, 126, 152 Lumière brothers, 3 Luxo, 107 Lyotard, Jean-François, 145

#### **M**

Management science, 23, 120, 127–129, 131 Mandel, Ernst, 54 Mandelbrot, Benoit, 151 Manovich, Lev, 37, 174, 175 Massachusetts Institute of Technology (MIT), 28, 34 Matsa, Sam, 59 Matterhorn, 119 Maya, 4, 61, 66, 73, 75, 98, 103, 105, 106 Mayer, Vicki, 109, 111 Mazzucato, Mariana, 8, 56

McCay, Winsor, 123 McCulloch, Richard, 134 McNamara, Robert, 88 Méliès, Georges, 3 MEL script language, 103 Metaxas, Dimitris, 66 Meteorology, 27 Metropolis, Nicholas, 22 Mihailova, Mihaela, 4 Military-industrial-academic complex, 37, 56, 58, 63, 65, 78 Military-industrial complex, 2, 53–55, 78, 89, 149, 173 Miller, Gavin, 65 Miller, Toby, 100 Mirowski, Philip, 31, 152 Mitchell, William John Thomas, 43 *Moana* (2016), 4, 119, 163–165, 167 Monopoly (game), 7, 25 *Monster's Inc*. (2001), 119 Monte Carlo method, 23, 24, 63 Moore School, 38 *Moria*, 25 Mother of all demos, 58, 60 Motion Picture Company (MPC), 94, 96 Multiple Agent Simulation System in Virtual Environment (MASSIVE), 30, 31 Museth, Kenneth, 68 Myers, Robert, 60, 65

#### **N**

Naiad, 28, 67
Nake, Frieder, 2, 25
National Advisory Committee for Aeronautics (NACA), 34, 35
National Research Council, 34
Natural Motion, 30
Navier-Stokes equation, 63

Neale, Steven, 70
Nees, Georg, 2, 25
Neoliberalism, 55, 121, 136, 138
Netflix, 18
Neumann, John von, 23, 27, 28, 30, 152
New international division of cultural labour, 100
*New Kind of Science* (*A*), 152
New York Institute of Technology (NYIT), 59
Noble, Safiya, 138
Nordenstam, Marcus, 67
North, Dan, 70, 146
Nuke (software), 105, 106
Numerically Modeled Severe Storm, 158

#### **O**

Office of Scientific Research and Development (OSRD), 34, 35
O'Neil, Cathy, 138

#### **P**

Pacific Data Images (PDI), 60, 61
Pallant, Chris, 41
*Parallel I–IV* (2012–2014), 175
Particle in Cell (PIC), 64
Particle systems, 105, 126
Peirce, Charles S., 19
*Perfect Storm* (*The*) (2000), 66, 155–162, 167, 178
Perfect storms, 158
*Phantasmagoria* (1908), 173
PhysBAM, 68
Pinewood Studios, 177
*Pinocchio* (1940), 3
Pipelines, 97–101, 103
Pipeline TD, 98
*Piper* (2016), 108, 109

Pixar, 3, 25, 72, 74, 89, 92, 108, 109, 119–122, 124–127, 130–139
*Pixar Studios Stories*, 133
Plasmaticness, 172
Plug-ins, 102, 103
Poincaré, Henri, 26, 27
Point Richmond, 132, 133
Popper, Karl, 56
*Poseidon* (2006), 67
Poseidon Research, 60, 65
Post-cinema, 175
Post-Fordism, 87, 100, 111, 121, 130, 134
Power, Pat, 173
Prince, Stephen, 10
*Principles of Scientific Management* (*The*), 122
Product development, 89, 90
Production workflows, 86, 89, 92–97, 100
Project management, 88, 89
Project Management Institute, 88
Purse, Lisa, 147, 148
Python, 103

#### **R**

RAND Corporation, 29
Random walk, 22
Ray Smith, Alvy, 3, 60, 125, 149
Raytheon, 35
Realflow, 78, 103–105
Redrobe, Karen, 174
Reeves, William T., 26, 60, 125, 126, 151
Regelous, Stephen, 30
Rehak, Bob, 147
Reiswitz, Georg, 24
Renderman, 72, 94
*Repas De Bébé* (*Le*) (1895), 3
Rhythm and Hues, 60, 66, 93, 110
*Ryan* (2004), 61

#### **S**

Sack, Warren, 152
Sadoul, Georges, 3, 177
Sammond, Nicholas, 123
Santa Fe Institute, 29
Schelling, Thomas C., 29
Schriever, Bernard, 88
Schumpeter, Joseph, 130
Scientific management, 122–125, 128
Scripts, 86, 98, 103
Second World War, 22, 35, 53, 54
Securities and Exchange Commission, 72, 76
Seidel, Stefan, 90
Seymour, Mike, 76
Silicon Graphics, 65, 72, 73
Silicon Valley, 8, 9, 53, 56, 62, 78, 91, 121, 130, 131
Simon, Herbert, 18, 36, 39
Sito, Tom, 62
Sketchpad, 58
Skywalker Ranch, 56
Smoke simulation, 28, 126, 172, 177
Smorganic, 104
*Snow Storm: Hannibal and his Army Crossing the Alps*, 161
*Snow White* (1937), 124, 173
Sobchack, Vivian, 3, 137
Software as a service, 92, 96
Software crisis, 90
Software development, 90, 91
*Sonic the Hedgehog* (2020), 96
Sony Pictures Animation, 51
Sony Pictures Imageworks, 18
Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH), 36, 57–63, 65–67, 126, 149, 150, 174
Spectacle, 146, 147, 153
*Spider-Man: Into the Spider-Verse* (2018), 51

Splash (software), 119
Staiger, Janet, 88, 174
Stam, Jos, 66, 68
Stanford University, 1, 2, 34, 57, 65, 67, 68, 128
*Star Trek II*, 25, 61, 86, 125, 149–153
Stasiuk, Mark, 68, 74, 78
Stengers, Isabelle, 43
Steve Jobs building, 135–137
Stochastic simulation, 20–26, 128
STORM (software), 28, 68
Strother, Sandy, 3
Subcontracting, 136
Sublime (The), 159, 161, 166
Sugarscape, 29
*Superman Returns* (2006), 67
SuperPaint, 59
Sutherland, Ivan, 58, 59

#### **T**

Tax incentives, 52, 55, 69, 93, 100, 111, 171
Taylor, Frederick, 88, 122
Technical animation, 6–9, 101
*Ten Commandments* (*The*) (1956), 70
Tessendorf, Jerry, 67, 68, 110
*Theory of Film*, 178
Theory of managing creativity-intensive processes (TMCP), 90
Thom, René, 156, 161
Thompson, Kristin, 88
Three-body problem, 26
*Titanic* (1997), 65
Toho studios, 177
Total quality management, 132, 134–136
Tourneur, Maurice, 177
Toyota Production System (TPS), 89
*Toy Story* (1995), 17, 133
*Toy Story 2* (1999), 92, 99, 133
*Toy Story 3* (2010), 133

*Tron* (1982), 151
T3 Group, 27, 63
Turing, Alan, 37
Turner, J.M.W., 161
Turnock, Julie, 10
Turnover dates, 93
*Twister* (1996), 155, 157, 159, 161
*2010* (1984), 65

#### **U**

Ulam, Stanislaw, 22
United States Air Force, 27, 38
Universal Automatic Computer (UNIVAC), 38
University of Toronto, 57, 68, 100
Upson, Craig, 60
Uricchio, William, 41

#### **V**

Van Dam, Andy, 59
Vanderhoef, John, 101
*VFX Insider*, 76
VFX producer, 93, 94
VFX supervisor, 93–95, 110
Vincenti, Walter, 18, 35, 39
Visual Effects Society (The), 73, 107
*Vol Libre* (1980), 149, 151

#### **W**

*Walking* (1969), 173
Walt Disney Productions, 124, 127, 135, 137
*Waltz with Bashir* (2008), 173
Wasson, Haidee, 56
Waterfall development, 91
Water tank, 158, 177
*Waterworld* (1995), 65
Weather, 27, 155, 157

Welch, Eddie, 66
Weta Digital, 30
Whissel, Kristen, 147, 153
Whitney, John, 59
Williams, Raymond, 9, 33, 58
Wind tunnel, 34, 36, 178
Winsberg, Eric, 19
*Wired*, 31, 127, 138
Witzel, Morgan, 128
Wolf, Mark, 162
Wolfram, Stephen, 152
Wood, Aylish, 4, 147
*Works* (*The*), 59
Wright Forrester, Jay, 129

#### **X**

Xerox, 59, 60
Xgen, 75

#### **Y**

Yaeger, Larry, 60, 65

#### **Z**

Zeeman, Erik Christopher, 156
Zero VFX, 94
Zielinski, Siegfried, 121
Zync, 94